Splunk Enterprise Security

In Splunk Enterprise Security, why am I missing alerts due to time gaps?

CodyQ
Explorer

Question: Is there a way to use the index time of an event, in addition to the event time, for alerting purposes?

My system failed to catch an alert because the reporting system went down. When it started forwarding logs again, I missed several potential alerts because, for performance reasons, the alert only searched the last hour of event time ("now minus 1 hour"). I realize the obvious fix is to just expand my search window, but I was wondering if anyone has other solutions.

Has anyone ever created a retrospective search that can look for events that should have fired, but haven't?


spayneort
Contributor

You can change your search to use a larger event-time range, then limit it based on index time by adding something like _index_earliest=-5min@min to your search. Here is an article that covers this:

https://spl.ninja/2017/06/01/its-about-time-to-change-your-correlation-searches-timing-settings/
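As a minimal sketch of the approach above: widen the event-time window so late-arriving events are still in scope, but use _index_earliest/_index_latest so each run only alerts on data that was actually indexed since the last run. The index, sourcetype, and threshold below are placeholders, not from the original thread:

```
index=firewall sourcetype=pan:traffic earliest=-24h latest=now _index_earliest=-5min@min _index_latest=now
| stats count by src_ip
| where count > 100
```

Run on a 5-minute cron, this catches events whose _time is up to 24 hours old but which only just arrived after a forwarder outage, without re-alerting on data already indexed in earlier runs.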


