Splunk Search

Time interval for searches

kholleran
Communicator

I have a best-practice timing question for the veteran Splunkers out there. Right now I have a failed login search that runs every 15 minutes over the last 15-minute interval and alerts out if failed logins on a particular server exceed 3.

However, if I fail to log in twice at 1:59 and then twice more at 2:01, that is 4 failed logins in a two-minute span, yet no alert fires: the 2:00 run of the search sees only 2 failures in the 1:45-2:00 window, and the 2:15 run sees only 2 failures in the 2:00-2:15 window.

So the way I see it, I need some overlap, such as searching every 15 minutes over the last 20 minutes, but I was wondering how others handle this and whether there is a "best practice".
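Something like this, keeping the 15-minute schedule but stretching the window (just a sketch of what I mean, with placeholder search terms):

my failed login search earliest=-20m@m latest=now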

Thanks.

Kevin

1 Solution

ftk
Motivator

As a best practice, when I set up my alerts I build in a delay to ensure all items have been forwarded and indexed at the indexer, so they do not get skipped on the next run. For example, if I run a search every 15 minutes and it runs at 2:00:00, it looks at data from 1:45:00-2:00:00. If an event gets logged at 1:59:50, it might not get forwarded and indexed until 2:00:30 or so, but it will be indexed with a 1:59:50 timestamp. This means the next scheduled search, running at 2:15:00 and looking at events from 2:00:00-2:15:00, will miss this event.
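If you want to see how large that forwarding/indexing delay actually is in your environment, you can compare each event's index time to its event time; _indextime is Splunk's internal field recording when the event was indexed. The base search terms here are placeholders for your own:

my search terms earliest=-60m@m
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) AS avg_lag max(lag_seconds) AS max_lag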

As such, I always add a relative time range to my alert searches. For an every-15-minutes search, for example, I do:

my search terms earliest=-20m@m latest=-5m@m

If I run this search at 2:00:00, it looks at data from 1:40:00-1:55:00. This gives me a 5-minute buffer to account for forwarding/indexing delays.
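Applied to your failed login alert, the whole thing might look something like this (the search terms, host field, and threshold are placeholders for your actual ones):

my failed login search terms earliest=-20m@m latest=-5m@m
| stats count by host
| where count > 3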

Now, for a search that looks for an aggregate of events, going with an overlap might be the way to go if you are concerned about missing events across window boundaries. Just make sure you don't make the overlap too big, or you might end up with duplicate alerts for the same events.
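Combining the delayed window with an overlap, a 15-minute schedule could cover a 20-minute window, something like this (again a sketch; the 5 minutes of overlap between consecutive runs is an arbitrary choice):

my failed login search terms earliest=-25m@m latest=-5m@m
| stats count by host
| where count > 3

Each run then covers 20 minutes, with consecutive runs overlapping by 5 minutes. You can also use the alert's throttling settings to suppress repeat alerts for the same host during the overlap.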
