Splunk Enterprise Security

Developing reliable searches dealing with event indexing delay

StefanoA
Explorer

Hello everyone,

I am concerned about single-event-match (e.g. observable-based) searches and the indexing delay that events may have.

Would using accelerated data models allow me to simply ignore a situation like the one below, while still ensuring that such an event is taken into account anyway? If so, how?

I read that data models "faithfully deal with late-arriving events with no upkeep or mitigation required"; however, I am still concerned about what would happen in a case such as the one depicted in the image I'm uploading (a minimal sketch of a search that would miss such an event follows the list), where:

- T0 is the moment when the event happened / was logged (_time)
- T1 is the first moment taken into account by the search (earliest)
- T2 is the moment when the event was indexed (_indextime)
- T3 is the last moment taken into account by the search (latest)
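For illustration, here is a minimal sketch of the kind of single-event-match search I have in mind (the index, sourcetype and lookup names security, fw:traffic and threat_observables are invented), scheduled every 15 minutes over the last 15 minutes of event time:

index=security sourcetype=fw:traffic earliest=-15m latest=now
| lookup threat_observables ioc AS dest_ip OUTPUT ioc AS matched_ioc
| where isnotnull(matched_ioc)

Because it filters only on _time, an event with _time = T0 that only becomes searchable at T2, inside a run window [T1, T3] that starts after T0, is never returned: that run excludes it by event time, and the earlier runs that did cover T0 executed before the event existed in the index.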

What about, instead, taking a "larger" time frame for earliest / latest and then focusing on the set of events indexed between _index_earliest / _index_latest? Would this ensure that each and every event is taken into account by such a search? (Splunk notes that "When using index-time based modifiers such as _index_earliest and _index_latest, [...] you must run ...", and although I'm not entirely sure about the performance impact of doing so while also filtering by _indextime, I think it would still be a good idea to account for an assumed maximum indexing lag, large but not too large, e.g. 24h, similar to the one mentioned here https://docs.splunk.com/Documentation/Splunk/9.1.1/Report/Durablesearch#Set_time_lag_for_late-arrivi... ; exceeding that lag could then generate an alert on its own.)
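To make the idea concrete, here is a rough sketch (same invented names as above, with an assumed hourly schedule and 24h maximum lag; not a definitive implementation). The event-time window is kept wider than the assumed maximum lag, while the slice of events each run actually evaluates is bounded by index time, so an event is evaluated by the run during which it was indexed, however old its _time is (provided the schedule and the snapped index-time window line up):

index=security sourcetype=fw:traffic earliest=-25h@h latest=now _index_earliest=-60m@m _index_latest=@m
| lookup threat_observables ioc AS dest_ip OUTPUT ioc AS matched_ioc
| where isnotnull(matched_ioc)

And a companion sketch for the alert on the lag assumption itself, flagging events that arrive more than the assumed 24h (86400 s) after their _time:

index=security sourcetype=fw:traffic earliest=-7d latest=now _index_earliest=-60m@m _index_latest=@m
| eval lag_seconds = _indextime - _time
| where lag_seconds > 86400
| stats count AS late_events max(lag_seconds) AS worst_lag_seconds BY sourcetype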


Are there different and simpler ways to achieve such mathematical certainty, regardless of the indexing delay? (Assuming, of course, that the search isn't skipped.)


Thank you all

1 Solution

_JP
Contributor

I'm not sure why my original reply isn't showing up... but it is now located in a totally different place, under a copy of this post:

 

Re: Developing reliable searches dealing with even... - Splunk Community


