We have a server running in the Japan timezone. Recently, during a live test, we could not find the logs we expected.
The next day we ran a query to calculate the delta between index time and event time: eval delta=_indextime-_time
Some of the results show a negative delta.
Please refer to the sample results below. Since it is not possible in real time for an event to be indexed before it occurs, is there an issue because the server's timezone is offset from UTC?
_time || _indextime || delta
2019-04-18 13:49:20.992 || 2019-04-18 14:08:13 || 1132.008
2019-04-18 12:50:37.005 || 2019-04-18 14:08:13 || 4655.995
2019-04-18 13:49:21.046 || 2019-04-18 13:49:26 || 4.954
2019-04-18 13:49:21.038 || 2019-04-18 13:49:23 || 1.962
2019-04-18 21:53:45.843 || 2019-04-18 12:53:51 || **** -32394.843 ****
2019-04-18 12:52:04.591 || 2019-04-18 12:52:05 || 0.409
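For reference, a sketch of the search used to produce these figures, extended to summarize the delay per sourcetype (index=your_index is a placeholder for the actual index):

index=your_index
| eval delta=_indextime-_time
| stats min(delta) AS min_delay, max(delta) AS max_delay, avg(delta) AS avg_delay, count(eval(delta<0)) AS negative_events BY sourcetype

A min_delay that is both large and negative and sits close to a whole-hour offset (here about -9 hours, the JST offset) is the signature of a timezone mismatch for that sourcetype rather than a clock problem on the server.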
This happens all the time. You will find that not only is the "delay" negative, it is also roughly a multiple of 60 minutes. It happens because the Splunk indexer used its own OS timezone: the event's timestamp does not include one, and you did not tell Splunk which one to use. We see this all the time in the Health Checks we do for clients; a good app for tracking all this is Meta Woot!. The main way to fix it is to set TZ = <timezone> in props.conf. It can be considerably more complicated than that, which is why people often hire PS to help sort it all out and fix it, but that is the gist of it.
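As a concrete sketch (the sourcetype name my_app_logs is hypothetical, and the TIME_FORMAT shown matches the sample timestamps above; adjust both to your actual data), applied on the indexer or heavy forwarder that parses this sourcetype:

[my_app_logs]
TZ = Asia/Tokyo
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25

Note that TZ only applies when the event timestamp itself carries no timezone; if the events already include an offset, Splunk honours that instead.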
The best option is to add the timezone at the end of the timestamp in each event, if possible ;-)
For changing this at the application level, I will need to evaluate whether log4j can include the timezone in its timestamp format; otherwise we may need to go the props.conf route suggested in the other answer.
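As a rough sketch of the log4j side (assuming log4j 2.x with a PatternLayout; the surrounding appender configuration is omitted), the numeric UTC offset can be emitted by the date conversion pattern:

<PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS Z} %-5p %c{1} - %m%n"/>

With the offset present in every event (e.g. "+0900"), Splunk can parse it via a TIME_FORMAT ending in %z, and the TZ setting in props.conf is no longer needed for this sourcetype.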
Thanks for the clarification that the timezone issue I am seeing is by design. I will check with the team that controls props.conf and ask whether anyone knows how to engage PS for our account.
This ticket is to understand how Splunk indextime calculations occur.
This is NOT a ticket. Tickets are official requests with Splunk Support. This is not Splunk Support and is far from official. While there are some Splunk employees here, the majority are fellow users volunteering to help other users.
Thanks for correcting me. I asked this "question" to understand whether my interpretation is right that the negative time delay is due to the server's timezone being offset from UTC by some hours, or whether there is some other potential problem I need to check on the Splunk configuration side.
Are you able to provide any thoughts around that?
The _indextime field contains the time that an event was indexed, expressed in Unix time. _time is the timestamp parsed from the event, based on the timezone of the forwarder. If your delay is negative, it means your timestamp is being parsed as a time in the future, which indicates incorrect timestamp parsing. For the sourcetype you're getting this data from, what timestamp parsing configuration have you set up, and does it take the timezone into account?
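If you are not sure what is currently applied, one way to check (a sketch; <your_sourcetype> is a placeholder) is to dump the effective settings with btool on the instance that parses the data:

$SPLUNK_HOME/bin/splunk btool props list <your_sourcetype> --debug | egrep 'TZ|TIME_PREFIX|TIME_FORMAT|MAX_TIMESTAMP_LOOKAHEAD'

If none of these are set for the sourcetype, Splunk falls back to automatic timestamp extraction and the forwarder's or parsing host's timezone, which matches the behaviour described above.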