All Posts


You have very little data in your buckets. And comparing bucket sizes from two different environments with different data (especially when there's so little of it) makes no sense. Normally you'd expect buckets of several dozen or even hundreds of megabytes.
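If you want to look at the actual bucket sizes anyway, dbinspect will show them per bucket; a quick sketch (the index name is a placeholder):

| dbinspect index=your_index
| stats count AS buckets avg(sizeOnDiskMB) AS avg_size_mb max(sizeOnDiskMB) AS max_size_mb BY index

Run it against each environment separately rather than trying to compare the raw numbers side by side.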
Adding to what's already been said - there is very rarely a legitimate use case for SHOULD_LINEMERGE. Relying on Splunk recognizing something as a date to break the data stream into events is not a very good idea. You should set a proper LINE_BREAKER instead.
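A minimal props.conf sketch of what that looks like (the sourcetype name is a placeholder, and the regex assumes events are separated by plain newlines):

[your_sourcetype]
SHOULD_LINEMERGE = false
# the capture group is consumed as the event boundary
LINE_BREAKER = ([\r\n]+)

Adjust the LINE_BREAKER regex to whatever actually delimits your events.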
Depends on what the desired outcome looks like. Since stats produces aggregated results, you have to ask yourself what it is you really want. If you just want to add some aggregated value to each result row - that's what eventstats is for (be careful with it though, because it can be memory-hungry). If you want to keep aggregated field values, you might use values() or list() as additional aggregation functions.
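A rough sketch of both approaches (the field names are placeholders, not from this thread):

... | eventstats avg(duration) AS avg_duration BY host
... | stats count values(status) AS statuses BY host

The eventstats version keeps every original row and just adds the aggregate to each one; the stats version collapses to one row per host but carries the collected status values along.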
And you really think this is a common use case? I use Windows at home in a VM with GPU passthrough, but I wouldn't say that's something used on a typical desktop.
So I have got it working 99%. I did something like this:

index=xxxxxx "Starting iteration" OR "Stopping iteration"
| stats earliest(_time) as Start, latest(_time) as Stopped
| eval Taken=tostring(Stopped-Start)
| eval Taken=Taken/60
| eval Time_Taken=(if(Taken>15,"Not Good","Good"))
| where Time_Taken="Not Good"
| table Start Stopped Time_Taken

Now it shows Not Good if it took over 15 minutes. The issue is how to set the alert properly - if I set it to check every 15 minutes, it may overlap two runs. Example: an iteration started at 7pm and finished at 7.08pm. The alert checks at, say, 7.25pm for the last 15 minutes and sees 7.08pm as Stopped, then a 7.15pm Start that maybe finished at 7.24pm - if that makes sense to you gurus.
Hi @Vignesh,

There is no documented REST API, but the SA-ThreatIntelligence app exposes the alerts/suppressions service to create, read, and update (including disable and enable) suppressions. To delete suppressions, use the saved/eventtypes/{name} endpoint (see https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTknowledge#saved.2Feventtypes.2F.7Bname.7D).

Search, Start Time, and End Time are joined to create SPL stored as an event type named notable_suppression-{name}, e.g.:

`get_notable_index` _time>1737349200 _time<1737522000

Description and status are stored as separate properties. You can confirm this in $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/local/eventtypes.conf:

[notable_suppression-foo]
description = bar
disabled = true
search = `get_notable_index` _time>1737349200 _time<1737522000

Add -d output_mode=json to any of the following examples to change the output from XML to JSON.

Create a suppression:
Name: foo
Description (optional): bar
Search: `get_notable_index`
Start Time (optional): 1/20/2025 (en-US locale in this example)
End Time (optional): 1/22/2025 (en-US locale in this example)
Status: Enabled

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions \
  --data-urlencode name=notable_suppression-foo \
  --data-urlencode description=bar \
  --data-urlencode 'search=`get_notable_index` _time>1737349200 _time<1737522000' \
  --data-urlencode disabled=false

Read a suppression:

curl -k -u admin:pass -X GET https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo

Modify a suppression:
Description: baz
Search: `get_notable_index`
Start Time (optional): (none)
End Time (optional): (none)
Status: (unchanged)

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo \
  --data-urlencode description=baz \
  --data-urlencode 'search=`get_notable_index`'

Disable a suppression:

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo \
  --data-urlencode disabled=true

Enable a suppression:

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo \
  --data-urlencode disabled=false

Delete a suppression:

curl -k -u admin:pass -X DELETE https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/saved/eventtypes/notable_suppression-foo
One comment: never use table before stats! After table, all processing moves to the search head and it cannot utilize parallel processing with stats. If you want to remove some fields before stats, always use fields instead of table! You will get more performance that way. Of course, after stats your processing continues on the search head side, but stats does its preprocessing on each indexer at the same time, and only the merging and the final stats processing are done on the search head side.
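A quick illustration of the difference (the index and field names are placeholders):

index=web sourcetype=access_combined
| fields clientip status
| stats count BY status

versus

index=web sourcetype=access_combined
| table clientip status
| stats count BY status

In the first version fields is a streaming command, so the events stay distributed and stats can do its pre-aggregation on the indexers; in the second, table forces everything onto the search head before stats runs.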
Hi @Ste,
you have to add to your stats command:

values(*) AS *

in your case:

| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode
| where stoerCode IN ("K02")
| stats count as periodCount values(*) AS * by zbpIdentifier
| sort -periodCount
| head 10
| fields zbpIdentifier zbpIdentifier_bp periodCount importZeit_uF

but they are grouped by zbpIdentifier.

Ciao.
Giuseppe
Dear experts,
according to the documentation, after stats only the fields used in the stats command are left.

| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode
| where stoerCode IN ("K02")
| stats count as periodCount by zbpIdentifier
| sort -periodCount
| head 10
| fields zbpIdentifier zbpIdentifier_bp periodCount importZeit_uF

To explain in detail: after table the following fields are available:

importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode

After stats count, only zbpIdentifier and periodCount are left.

Question: how do I change the code above to get the count and still have all fields available as before?

Thank you for your support.
If the data is the same as before but the presentation is different, then something in the settings is different now. Use the btool command (part of the Admin's Little Helper app - a mandatory app for Splunk Cloud customers, IMO) to review the settings and make sure they are being applied as expected.
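For reference, on an instance where you have CLI access the equivalent check would look something like this (the sourcetype name is a placeholder):

splunk btool props list your_sourcetype --debug

The --debug flag shows which configuration file each setting comes from, which makes precedence problems easy to spot.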
Thanks for the response. Everything was migrated over and is exactly the same as before. You would think there would be a toggle to always use highlighted syntax since it's already parsing JSON.
Have you migrated/moved those original props.conf settings from on-prem to Cloud? If you still have them somewhere, just create an app from them and install it in Cloud. Of course, you must ensure that they take precedence over the current configuration in Cloud.
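For illustration, a minimal app for this could look something like the following (the app name is just an example):

my_json_props/
    default/
        app.conf
        props.conf

with props.conf holding the sourcetype stanza you used on-prem (for example KV_MODE = json, if that is what you had). Package it and install it through the normal Splunk Cloud app upload and vetting process.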
Timestamps are extracted before INGEST_EVAL is performed, so you'll need to use the := operator to replace _time. These props should work better than those shown.

[QWERTY]
# better performance with LINE_BREAKER
SHOULD_LINEMERGE = false
# We're not breaking events before a date
BREAK_ONLY_BEFORE_DATE = false
# Break events after newline and before "~01~"
LINE_BREAKER = ([\r\n]+)~\d\d~
MAX_TIMESTAMP_LOOKAHEAD = 43
TIME_FORMAT = %Y%m%d-%H%M%S;%3N
# Skip to the second timestamp (after the milliseconds of the first TS)
TIME_PREFIX = ;\d{3}~
MAX_DAYS_AGO = 10951
REPORT-1 = some-report-1
REPORT-2 = some-report-2
When I click on the raw log and back out of it, it shows up as highlighted. How do I default the sourcetype/source to always show as highlighted? I've messed with the props.conf and can't get it. This only started occurring after we migrated from on-prem Splunk to Splunk Cloud. Before, these logs would automatically show up parsed as JSON.
The WARNING text can be ignored safely.  The rest is normal.
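If you would rather make the warning go away than ignore it, the setting it points at lives in server.conf; a minimal sketch, assuming your certificates actually contain the correct hostnames:

[sslConfig]
cliVerifyServerName = true

Only enable this if hostname validation will actually pass, otherwise CLI-to-splunkd calls will start failing.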
I see that INGEST_EVAL allows for the use of conditionals. Thank you very much, I'll give that a try.
Hi, for the Splunk search head component I copied the configuration files from the old virtual machine to the new physical server. After that I started the Splunk services on the new physical box, but the Web UI is not loading and I get the message below:

Waiting for web server at https://127.0.0.1:8000 to be available................
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
. Done
You could probably do this with INGEST_EVAL? Just test whether you can do it with "eval ......" on one line, chaining several of those one after another with suitable if() calls etc. If/when you get it working in SPL, just copy that into transforms.conf as one INGEST_EVAL expression.
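A minimal sketch of what that could end up looking like (the stanza, field, and sourcetype names are placeholders, not anything from this thread):

transforms.conf:
[set_severity]
INGEST_EVAL = severity=if(match(_raw, "ERROR"), "high", "low")

props.conf:
[your_sourcetype]
TRANSFORMS-set_severity = set_severity

Multiple comma-separated assignments are allowed inside one INGEST_EVAL expression, which is what makes the "chain several evals" approach work at index time.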
@Eldemallawy

In Splunk, EPS (events per second) is a metric used to measure the rate at which events are ingested into the Splunk indexer. The formula to calculate EPS is relatively straightforward:

EPS = (total number of events) / (time duration in seconds)

To calculate EPS, count the total number of events indexed within a specific time window and divide that count by the window length in seconds. For example, if you want to calculate the EPS over a 1-minute window (60 seconds) and you have indexed 3,000 events during that time:

EPS = 3000 / 60 = 50 events per second

This means you are indexing, on average, 50 events per second during that 1-minute period.
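If you want to measure it directly from your data, a rough SPL sketch (the index name and the 1-minute window are placeholders):

index=your_index earliest=-1m@m latest=@m
| bin _time span=1s
| stats count AS events_per_second BY _time
| stats avg(events_per_second) AS avg_eps max(events_per_second) AS peak_eps

The internal metrics.log also reports per-index throughput if you prefer not to scan the raw events.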
Many thanks @kiran_panchavat, much appreciated. Cheers, Ahmed.