Activity Feed
- Posted Re: Routing and Forwarding from HWF to two indexers with different filtering on Getting Data In. 11-27-2024 06:44 AM
- Posted Routing and Forwarding from HWF to two indexers with different filtering on Getting Data In. 11-27-2024 02:39 AM
- Posted Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-30-2024 03:10 AM
- Posted Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-30-2024 03:07 AM
- Posted Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-30-2024 02:51 AM
- Posted Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-30-2024 01:26 AM
- Karma Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) for isoutamo. 09-30-2024 01:26 AM
- Posted Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-25-2024 04:36 AM
- Karma Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) for PickleRick. 09-25-2024 04:36 AM
- Posted Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-25-2024 01:40 AM
- Posted Re: Ingestion issue from syslog-ng on All Apps and Add-ons. 06-02-2024 03:56 AM
- Posted Ingestion issue from syslog-ng on All Apps and Add-ons. 05-29-2024 12:00 PM
- Posted Re: How to add multiple field in a single search on Splunk Search. 05-23-2024 05:20 AM
- Got Karma for How to disable TLS1.0 and TLS1.1 for webhook port?. 04-02-2024 11:11 AM
- Posted Re: Running splunk on Rocky Linux distro on Installation. 03-06-2024 07:21 AM
- Posted Re: Moment.js not available in splunk 9.1.x+ on Splunk Enterprise. 02-23-2024 08:52 AM
- Posted Re: Microsoft Teams Add-on for Splunk: handling of 404 error on All Apps and Add-ons. 01-12-2024 03:09 AM
- Posted Re: Moment.js not available in splunk 9.1.x+ on Splunk Enterprise. 12-14-2023 06:20 AM
- Posted Re: Why is moment.js not available in splunk 9.1.x+? on Splunk Enterprise. 11-06-2023 09:46 AM
- Posted Re: Why is moment.js not available in splunk 9.1.x+? on Splunk Enterprise. 11-06-2023 08:22 AM
11-27-2024
06:44 AM
It's a FortiAnalyzer via a custom TCP port. Probably the simplest solution will be to configure new log forwarding directly on the FAZ, with filtering. Thanks for the help!
11-27-2024
02:39 AM
Hi Splunkers, I have an HWF that collects the firewall logs. For cost-saving reasons, some events are filtered out and not ingested into the indexer. For example, I have this in props.conf:
[my_sourcetype]
TRANSFORMS-set = dns, external
and this in transforms.conf:
[dns]
REGEX = dstport=53
DEST_KEY = queue
FORMAT = nullQueue
[external]
REGEX = (a pattern for a specific external IP range)
DEST_KEY = queue
FORMAT = nullQueue
So my HWF drops those events and the rest is ingested into the on-prem indexer - so far, so good... Now one of our operational teams has requested that I ingest "their" logs into their Splunk Cloud instance. How can I do this technically?
1. I want to keep all the logs on the on-prem indexer, with the filtering.
2. I want to ingest events from a specific IP range to Splunk Cloud, without filtering.
BR, Norbert
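EDIT: I was thinking along these lines, but I'm not sure it holds together (a rough, untested sketch - the group names, hosts, and IP-range regex are placeholders, and a real Splunk Cloud stack would normally use the forwarder credentials app instead of a hand-written stanza):
outputs.conf on the HWF:
[tcpout]
defaultGroup = onprem_indexers
[tcpout:onprem_indexers]
server = onprem-idx1.example.local:9997
[tcpout:splunk_cloud]
server = inputs.example.splunkcloud.com:9997
transforms.conf:
[route_team_to_cloud]
# placeholder regex for the team's source IP range
REGEX = src=10\.20\.30\.
DEST_KEY = _TCP_ROUTING
FORMAT = onprem_indexers,splunk_cloud
props.conf:
[my_sourcetype]
TRANSFORMS-set = route_team_to_cloud, dns, external
My worry: nullQueue drops apply to every output, so the dns/external filters would also kill the team's events before they reach the Cloud. I suppose I would have to exclude their IP range from the filter regexes, or clone the data somehow?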
Labels: heavy forwarder, props.conf, transforms.conf
09-30-2024
03:10 AM
Our private cloud...
09-30-2024
03:07 AM
I have roughly 70% modular inputs, 25% forwarded, and 5% other (scripted, HEC). Cold and frozen are on S3. In the last 5 years we have not needed to recall any frozen data, so this is not really important. (I will cross that river whenever needed :)) What is important is around 90-120 days of historical, searchable data. So either I move it back from cold after everything is set up, or I just wait until it's outdated but keep the old server to search it... "And with SmartStore, especially on-prem, you must ensure and test that you have enough throughput between nodes and S3 storage!" Exactly, that is what we are checking now. We can have 10G, but this is just theoretical, because dedicated 10G is not possible...
09-30-2024
02:51 AM
We have S3. Currently we are using an NFS bridge to mount it to the server and send the cold buckets there. The plan is to change this to SmartStore.
09-30-2024
01:26 AM
Yes, I also agree with @PickleRick, but sadly I need to cook with what I currently have... We have an on-prem standalone. The OS must be replaced (CentOS 7), and even the hardware warranty will expire at the end of this year. We have our own virtual environment and S3 as well. (I have system architect colleagues, but they are not "Splunk-related" ones.) I have similar plans to what you describe, with only one major difference: I plan to set up 2-3 heavy forwarders and migrate the current inputs to them. I can do this one by one, and fast, without a huge outage. I will set up the new deployment in parallel, and when everything looks okay, I will redirect the HFs to the new deployment. Only the cold buckets are "problematic" now. But we can still keep the old environment without new inputs and search historical data there if needed; once it expires, we stop the standalone... Thank you for the insights!
09-25-2024
04:36 AM
Thank you for all of this. Every bit of information will be helpful. Believe me, if I could, I would hire a whole team for this. 🙂 But I'm just an average security guy here who "has some clue about Splunk". The wallet is owned by someone else... BR.
09-25-2024
01:40 AM
Hi fellows, It's time to migrate our good old standalone Splunk Enterprise to a 'Small Enterprise deployment'. I have read through tons of docs, but unfortunately I didn't find any step-by-step guide, so I have many questions. Maybe some of you can help.
- The existing server is CentOS 7; the new servers will be Ubuntu 22.04. Just before the migration, I plan to upgrade Splunk on it from 9.1.5 to the latest 9.3.1 (it wasn't updated earlier because CentOS 7 is not supported by 9.3.1). OR do I set up the new servers with 9.1.5 and upgrade them after the migration?
- Our daily volume is 300-400 GB/day, and it will not grow drastically in the medium term. What are your recommended hardware specs for the indexers? Can we use the "Mid-range indexer specification", or should we go for the "High-performance indexer specification" (as described at https://docs.splunk.com/Documentation/Splunk/9.3.1/Capacity/Referencehardware)?
- If I understand correctly, I can copy /etc/apps/ from the old server to the new search head, so I will have all the apps, saved searches, etc. But which config files must be modified to get the data to the new indexers? (For forwarders this is clear, but we are using a lot of other inputs: REST API, HEC, scripted.)
- Do I configure our existing server as part of the indexer cluster (3rd member) and then remove it from the cluster once all the replication to the new servers is done? Or do I copy the index data to one of the new indexers, rename the buckets (adding the indexer's unique ID), and let the cluster manager do the job? (Do I need a separate cluster manager, or could the SH do the job?)
And here comes the big twist...
- Currently we are using S3 storage via a NAS bridge for the cold buckets. This solution is not recommended, and we are already experiencing drawbacks. So we plan to change the configuration to use SmartStore. How can I move the current cold buckets there? (A lot of data, and because of the NAS bridge it is very, very slow to copy...)
Thanks in advance, Norbert
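EDIT: For the SmartStore part, this is roughly the indexes.conf I have in mind (untested; the volume name, bucket, and endpoint are placeholders):
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.example.local
[my_index]
remotePath = volume:remote_store/$_index_name
homePath = $SPLUNK_DB/my_index/db
# coldPath must still be defined, even though SmartStore barely uses it
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb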
Labels: configuration, installation
06-02-2024
03:56 AM
Hello, I checked your suggestion, but it did not solve my problem. There are about 200 hosts and about 3% are affected. (On the syslog server everything works flawlessly.) I have the same types of device logs that are not affected. To me it looks like a random issue with the forwarding... Kind regards, Norbert
05-29-2024
12:00 PM
Hello, Recently we replaced our syslog server, moving from rsyslog to syslog-ng. We are collecting network device logs; every source is logged to its own <IPaddress>.log file, and a universal forwarder pushes them to the indexer. Inputs and outputs are OK, the data is flowing, and the sourcetype is standard syslog. Everything is working as expected... except for some sources. I spotted this because the log volume has dropped since the migration. For those sources I do not have all of the events in Splunk. I can see the file on the syslog server; let's say there are 5 events per minute. The events are similar - for example, "XY port is down" - but not identical: the timestamp in the header and the timestamp in the event's message are different (the events are still the same length). So in the log file there are 5 events/min, but in Splunk I can see only one event per 5 minutes. The rest are missing... Splunk randomly picks up ~10% of the events from the log file (all the extractions are OK for those, and there is no special character or anything odd in the "dropped" events...). My feeling is that it's because of the similar events - Splunk thinks they are duplicates - but on the other hand that cannot be, because they are different. Any advice? Should I try to add a crcSalt or try to change the sourcetype? BR. Norbert
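EDIT: For reference, this is the monitor stanza on the UF that I would try to tweak (the path is a placeholder):
[monitor:///var/log/network-devices/*.log]
sourcetype = syslog
# raise the number of bytes hashed for the initial file checksum
# (default 256), in case near-identical file headers confuse the forwarder
initCrcLength = 1024
# mix the full file path into the checksum
crcSalt = <SOURCE>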
Labels: troubleshooting
05-23-2024
05:20 AM
Maybe I misunderstand your question, but it's simple:
index=testing field1="write" field2="*@abc.com"
| table field1, field2, ....
if "@abc.com" is a user name and not a domain (as I assume) you do not need to put the wildcard (*) before. If you put it, it will result in every user with @abc.com. Like, user1@abc.com, user2@abc.com...
Alternative:
index=testing | stats count by field1 field2 | search field1="write" AND field2="*@abc.com"
Regards,
03-06-2024
07:21 AM
Hello, Do you have any experience with Splunk on Rocky Linux since then? We have to migrate our CentOS 7 systems soon, and one of the candidates is Rocky 9. But the system requirements page https://docs.splunk.com/Documentation/Splunk/9.2.0/Installation/Systemrequirements no longer lists its kernel version (5.14) (the same goes for RHEL). I believe it will work, but since I need to migrate a physical production server, I want to reduce the risk as much as I can...
02-23-2024
08:52 AM
Hello, 1.0.40 is working fine for me. Clear your browser cache or use a different browser/incognito mode. Note: the configuration has changed; you need to set a new API key, even for the Overview dashboards.
01-12-2024
03:09 AM
Hi, I still have no 100%-working workaround. I tried to create an alert on my search head: when the subscription fails, it triggers a curl script to disable and re-enable the inputs. I learned two important things there. First, order matters: you should disable the webhook, then the subscription input, then the call record input. Then enable the webhook and enable the subscription. This updates the subscription, but sometimes it doesn't work correctly - in that case you should clear the KV store first - and the webhook ends up in an Exit state! So you should disable the webhook again, enable it, and then enable the call record input. Done manually, the method above solves the issue every time. But the second thing: the scripted disable/enable only works about 50% of the time. It seems the call record input is not correctly reset by the script. So currently I have an alert to myself: "Go, monkey, and reset it manually" 🙂
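For reference, the disable call in my script is roughly this shape (the host, app, and input names are placeholders, and I'm not 100% sure this endpoint layout matches this add-on exactly):
# disable one input via the management port
curl -k -u admin:$PASS -X POST https://sh.example.local:8089/servicesNS/nobody/<teams_addon_app>/data/inputs/<input_scheme>/<input_name>/disable
# ...and the matching enable
curl -k -u admin:$PASS -X POST https://sh.example.local:8089/servicesNS/nobody/<teams_addon_app>/data/inputs/<input_scheme>/<input_name>/enable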
12-14-2023
06:20 AM
Hi All, I opened a ticket with Cisco support and they promised that the app will be updated soon. KR
11-06-2023
09:46 AM
... I tested it: if you copy back moment.js from /opt/splunk/quarantined_files/share/splunk/search_mrsparkle/exposed/js/contrib/ to /opt/splunk/share/splunk/search_mrsparkle/exposed/js/contrib/, the Cisco app works. But this is a hack, and Splunk will complain about the file integrity. I think Cisco needs to update its app.
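The exact hack, for anyone who wants to try it at their own risk (the paths are the ones from my installation above):
cp /opt/splunk/quarantined_files/share/splunk/search_mrsparkle/exposed/js/contrib/moment.js /opt/splunk/share/splunk/search_mrsparkle/exposed/js/contrib/
# note: Splunk's file integrity check will flag the restored file afterwards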
11-06-2023
08:22 AM
Hello, First of all, I'm far, far away from JavaScript. But maybe those who know it can help: it seems to me that Splunk removed moment.js after the update. For me, it can still be found in the /opt/splunk/quarantined_files/share/splunk/search_mrsparkle/exposed/js/contrib/ folder. The new(?) version is supposed to be here: /opt/splunk/share/splunk/search_mrsparkle/exposed/js/contrib/moment/lang/, but it seems to be "localised" now. Or I'm totally wrong 🙂 Please share your thoughts; I faced the same issue with the Cisco Cloud Security App. Regards, Norbert
09-21-2023
07:42 AM
Try:
Your search
| chart sparkline(count,1min) count by field
(More than 1min will generate a shorter sparkline.) BR. Norbert
05-11-2023
12:49 AM
... and finally I found it. I can't explain why, but if I replace the \n with any random character and then do the split, it works:
...| rename "extensionAttribute.value" AS value
| search value="*" AND NOT value="No Base*"
| eval value=replace(value,"\\n",";")
| makemv delim=";" value
| mvexpand value
| table value
05-11-2023
12:41 AM
Thanks. First of all, I just realised that the separator is not just a backslash but "\n" - a newline. Anyway, my results are the same as with split. makemv does the job too with any delimiter except the \n (\\n, \\\\n, or any variation).
05-10-2023
05:00 AM
Hi,
I have a JSON field where multiple values are listed, separated by backslashes in raw view (spaces in list view), like this:
"value": "audit_retention_configure\nos_airdrop_disable......\nsystem_settings_wifi_menu_enable\n"
In list view the extraction looks OK, but the whole list is shown as a single value. I would like to split it.
I did this:
Mysearch
| rename "extensionAttribute.value" AS value
| search value="*" AND NOT value="No Base*"
| eval values=split(value,"X")
| mvexpand values
| table values
If I set X="\" (note the unbalanced quote), or "\\", or " " (space), there is no change in the result; if I set, for example, "_", it splits the field by _ like a charm...
Please advise what I should do to get this result:
audit_retention_configure nos_airdrop_disable . . . nsystem_settings_wifi_menu_enable
Labels: field extraction, JSON
04-17-2023
12:55 AM
Hi, As far as I understand, the root of this issue is that Splunk cannot determine whether your SSL certificate issuer is trusted or not. I played around with this a lot - I am using a CA-trusted wildcard certificate - and ended up with this configuration in server.conf:
sslVerifyServerCert = true
cliVerifyServerName = true
serverCert = $SPLUNK_HOME/etc/auth/mycert/cert-with-key.pem (-> server cert + intermediate chain cert + root cert + private key)
sslRootCAPath = /etc/ssl/certs/ca-bundle.crt
sslRootCAPath is the path of your OS trusted CA bundle. You may need to add your issuer to this list manually (the root cert only). The process depends on the OS, but is similar everywhere: https://ubuntu.com/server/docs/security-trust-store Now I have no such warning, and everything seems to be working fine. (It may also work if you point sslRootCAPath at your root cert only, but I have not tested that.) KR.
03-21-2023
01:44 AM
I tuned it a bit, but BIG thanks for the concept!
index=_internal source="*metrics.log" eps "group=per_source_thruput" NOT filetracker
| eval events=eps*kb/kbps
| timechart fixedrange=t span=1m limit=5 sum(events) by series
| untable _time series count
| sort _time 0 series
| streamstats current=t time_window=5m count(eval(count>X)) as rollingHighCount by series
| where rollingHighCount=5
03-20-2023
02:22 PM
I mean with the "strange", that your search returns totally different results than my search 🙂 My goal is: to monitor the number of events per series per minute - the flow itself. On top of that if there is a peak, like 3-4x more events per minute than usual for a longer period (5-10 minutes), raise an alert. This suddenly increased traffic on network devices/firewalls could be a good indicator of an attack or some issue.
03-20-2023
08:45 AM
Hi, This returns a strange result. (If I do not remove the | timechart... line from the original search, there is no result at all.) And the result somehow does not include my firewalls, where the events per minute are over 100,000... Running this every 5 minutes shows the "top list" of series in those five minutes, but I am really looking for the peaks. Running my original search every 3 hours shows the peaks pretty well. But I want an email alert in case the events per minute go over a limit. For example, if the "normal" is 100,000/min, and it goes up to 250,000/min and then back to 100,000/min, that's OK - I do not want an alert. But if it stays at the 250,000/min level (set as X) for more than 5 minutes continuously, I would like to get the alert. (And I will check the behaviour later.)