I was reading through the docs and a question came to my mind.
Does Splunk have the different notions of time that exist in stream-processing products like Flink or Kafka? Flink distinguishes event time, ingestion time, and processing time for every arriving event, and uses mechanisms such as watermarks to handle the gap between event time and processing time.
From what I can see in the docs, Splunk has a single concept of time: a timestamp is added to each event as it arrives at the system, and the event time, the actual time at which the event was created, is ignored.
Splunk tries to assign the most useful timestamp it can to each event. This is configurable, so if the events are described correctly, the "event time" (if present in the data) is used. If all else fails, the time at which the event is processed is used. So Splunk doesn't really have a single notion of time; it uses the most useful one available to it.

If there are other timestamps within the data, these can still be extracted. For example, an event may contain both a start and an end time; these could be extracted into separate fields, and one, the other, or neither assigned as the event's timestamp.
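As a rough sketch of what that configuration looks like: in `props.conf`, the `TIME_PREFIX` and `TIME_FORMAT` settings tell Splunk where to find the event time inside the raw data, and `DATETIME_CONFIG = CURRENT` would instead force the processing (index) time. The sourcetype name and timestamp layout below are illustrative, not taken from any particular deployment:

```ini
# props.conf (sketch) — sourcetype name and format are hypothetical
[my_sourcetype]
# Regex preceding the timestamp in the raw event
TIME_PREFIX = event_time=
# strptime-style layout of the timestamp that follows TIME_PREFIX
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
# How many characters past TIME_PREFIX to scan for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 40
# Alternatively, DATETIME_CONFIG = CURRENT would assign processing time
```

If no timestamp can be extracted, Splunk falls back through other sources (e.g. the previous event's time or the current time), which is what makes its behaviour closer to "best available time" than to a strict event-time/processing-time split.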