Splunk Search

I know the data is in the index, why didn't my scheduled search find all of the events?

Mick
Splunk Employee

I have a saved search set up to check every minute for file changes. I have the start time set to [-1m] to search back 1 minute. I have the schedule set to [BASIC], running every [minute]. I have the alert condition set to trigger [if number of events] [is greater than] [0] and send an email to us.
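For reference, those UI settings roughly correspond to a savedsearches.conf stanza along these lines (a sketch only; the stanza name, search terms, and email address are placeholders):

[file_change_alert]
search = <your_search_terms>
enableSched = 1
cron_schedule = * * * * *
dispatch.earliest_time = -1m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = you@example.com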

Last night we had a file change, but for some reason we didn't receive email alerts for all of the events, even though they appear when I search [All time]. It seems the 1-minute search is not going back the full minute to catch all of the messages.

If I click the search link in the email we received, it shows only 7 events.

If I change the timeframe from [Custom] to [All Time], I see a total of 13 events, which includes all of the relevant events.

1 Solution

Mick
Splunk Employee

It's likely that the events you're referring to as missed were not actually present in the index at the time the search ran. The only reason Splunk wouldn't pick them up is if they weren't there yet.

You can verify this with the following search: <your_search_terms> | convert ctime(_indextime) as IT. The IT field will tell you when the events were actually written to the index. When you're indexing a high volume of data, or data from a lot of different sources, there can be a bit of lag between an event being produced and Splunk actually writing it to the index.
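For example, here's a rough sketch of a search you could use to eyeball the lag directly (the ET, IT, and lag_seconds field names are just illustrative):

<your_search_terms> | eval lag_seconds = _indextime - _time | convert ctime(_time) as ET ctime(_indextime) as IT | table ET IT lag_seconds

Any events with a lag_seconds value larger than your search window are the ones a 1-minute scheduled search would have missed.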

To account for this, many customers simply offset their searches a bit. For a 'last minute' search, you could start the search 2 or even 3 minutes back and end it 1 or 2 minutes back, still keeping a 1-minute window. For example:

<your_search_terms> startminutesago=3 endminutesago=2

You're still searching over a 1-minute span, and running the search every minute means you still cover every possible time range, but you're now allowing for the lag in getting data into the index.
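Depending on your Splunk version, the same offset can also be expressed with the earliest/latest time modifiers instead (a sketch, assuming those modifiers are available to you):

<your_search_terms> earliest=-3m@m latest=-2m@m

The @m snaps both boundaries to the start of the minute, so consecutive runs line up exactly with no gaps or overlaps between windows.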

