The issue I'm having is with an index and the real-time reporting that uses it. We currently use RabbitMQ to send JSON messages to a TCP port at a rate of only about 250 messages/second. In Splunk, the messages seem to take a few minutes to fully show up. For example, when I run a search on the sourcetype over the last 15 minutes, the most recent minute (e.g. 11:50 AM) might show a total of 500 events. When I run the same search a couple of minutes later, that same minute (11:50 AM) has grown to 7,000 events. It appears the index takes a few minutes to catch up. We are trying to run real-time reports, so this delay is making them inaccurate.
We have run real-time reports against other indexes created the same way, so we are a little stumped as to why this one doesn't behave the same. Any help would be appreciated.
Real-time searches don't work the same way as standard (historical) report searches. A real-time search matches events as they are being streamed into the index, while a report search reads events back from disk.
So if you have delays in sending or indexing, a real-time search will only show the events that actually arrived within its window. Events that show up a few seconds or minutes late won't appear in the real-time results. When you rerun the search later for the same time period as a historical (non-real-time) search, the indexer has caught up and finished receiving those delayed events, so your count is larger. You will see discrepancies like this if the Splunk queues on your indexers or forwarders are blocking. You could also have NTP issues on your servers causing timestamp skew.
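One way to confirm whether indexing lag is the cause is to compare each event's indexed time (`_indextime`) against its extracted timestamp (`_time`). This is a sketch; substitute your actual sourcetype name:

```
sourcetype=<your_sourcetype>
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) max(lag_seconds) by sourcetype
```

If the lag is consistently in the minutes range, that matches the behavior you're describing. To check for queue blocking, you can also search the internal metrics on the indexer, for example:

```
index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
```

A nonzero count for a queue (e.g. parsingQueue or indexQueue) suggests backpressure somewhere in the pipeline rather than a problem with your source.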