I have some sources that can have a significant delay (up to several hours) between the time of the event itself and the time the source transmits the event to my collector/forwarder/whatever there is.
After my Splunk infrastructure receives an event, there is no significant delay (usually under 30 seconds) from receive time to indexing time, so we won't get into that too deeply.
The problem is that I can parse out either the ingest/index time as _time (which of course completely skews any analytics about the "real life" events) or the event's internal time field (which prevents me from doing any stats on the actual transmission performance).
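To illustrate the two options, here is a rough props.conf sketch (the sourcetype name and timestamp format are made up for the example; only one of the two stanzas would actually be deployed for a given sourcetype):

```
# Option A: use the ingest time as _time (loses the real event time)
[my_sourcetype]
DATETIME_CONFIG = CURRENT

# Option B: parse the event's internal timestamp as _time (loses easy lag analysis)
[my_sourcetype]
TIME_PREFIX = "eventTime":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
```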
I can of course "dig" the _indextime out of the events (it's fascinating, though, that I can't display _indextime directly but have to do some magic like evaluating another field to the _indextime value), but with tens of millions of events that's quite heavy on the system.
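For reference, this is the kind of search I mean (index and sourcetype names are placeholders):

```
index=my_index sourcetype=my_sourcetype
| eval lag_secs = _indextime - _time
| stats avg(lag_secs) AS avg_lag max(lag_secs) AS max_lag BY sourcetype
```

The eval copy is needed because _indextime, like other underscore-prefixed internal fields, isn't shown in table/stats output unless it's assigned to a regular field first. This runs as a normal (non-accelerated) search over the raw events, which is what makes it expensive at volume.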
I can of course do very lightweight summary calculations against _time using tstats. But the problem is that tstats only supports span bucketing on the _time field, not on _indextime.
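In other words, something like this is cheap, but it buckets by the event time rather than the index time (index name is a placeholder):

```
| tstats count WHERE index=my_index BY _time span=1h
```

So a delayed event lands in the bucket of when it happened, and I can't get an equally cheap "how much arrived per hour" view from the other side.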