Hi,
I'm running a search query that is returning Postfix log messages that have been logged via syslog with timestamps down to the second, and indexed by a Splunk indexer cluster consisting of two indexers.
I was then trying to combine Postfix log messages pertaining to the same email message using Splunk's 'transaction' command, and then 'table'-ing the results. This would sometimes produce different results when run over the same data (and the same, non-current, time period).
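For context, the search is along these lines (the field names and the startswith/endswith patterns here are illustrative, not my exact query; Postfix logs do carry a queue ID that can be used to group events):

```
index=mail sourcetype=postfix_syslog
| transaction qid startswith="client=" endswith="removed"
| table _time, qid, from, to
```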
When looking at the raw events, I found that they sometimes came out in a different order. Adding 'splunk_server' to the results showed that the events that changed order came from different indexers. That is, sometimes the search head would show the events from indexer A before those from indexer B, and other times the other way around.
Depending on which way around the events were collated, the events would either match the transaction 'startswith' and 'endswith' parameters, or they wouldn't.
I'm wondering how the search head decides how to order results from multiple indexers. In this particular case, the '_indextime' of the events from the indexer returning the chronologically later log entries was one second later than that of the earlier entries, so a sort by '_indextime' would have resolved the problem.
However, I don't want to rely on a sort by '_indextime': there's no way to guarantee that all the events making up one 'transaction' won't be indexed within the same second, and when that happens '_indextime' can't break the tie, so the sort wouldn't resolve the issue.
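For what it's worth, the workaround I experimented with looks roughly like this (again with illustrative field names; 'sort 0' lifts the default 10,000-result limit on the sort, and '_indextime' is only a tie-breaker after '_time'):

```
index=mail sourcetype=postfix_syslog
| sort 0 _time _indextime
| transaction qid startswith="client=" endswith="removed"
```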
Thinking that it may be using '_indextime', and noticing the documentation's note about synchronising time between distributed Splunk servers, I checked that the time is the same on the search head and the two indexers. The servers are synced with NTP, and the 'date' command showed them having the same time to the second.
The Splunk forwarders are configured to send data to both indexers in a round-robin fashion.
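The relevant outputs.conf on the forwarders looks roughly like this (hostnames are placeholders; auto load balancing is the default, and the switching interval is controlled by 'autoLBFrequency'):

```
[tcpout]
defaultGroup = indexer_pool

[tcpout:indexer_pool]
server = indexerA.example.com:9997, indexerB.example.com:9997
autoLBFrequency = 30
```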
Thanks,
karl...