We have set up a Splunk forwarder to forward the latest logs from the same server, but we are seeing a huge difference between index time and event time. There is a delay of almost 17 hours when I use this query to compare the indexed time, the event time, and the delay (in hours):
index=myindex sourcetype=mysourcetype host=myhost | eval delay=(_indextime-_time)/60/60 | eval indexed_time=strftime(_indextime, "%+") | table indexed_time, _time, delay
This is what I got from splunkd.log on the forwarder:
01-08-2019 11:17:31.607 +0700 INFO TailReader - ...continuing.
01-08-2019 11:17:33.515 +0700 INFO TcpOutputProc - Connected to idx=*heavy_forwarder*, pset=0, reuse=1.
01-08-2019 11:17:46.608 +0700 INFO TailReader - Could not send data to output queue (structuredParsingQueue), retrying...
01-08-2019 11:17:51.609 +0700 INFO TailReader - ...continuing.
01-08-2019 11:17:55.286 +0700 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_*sourcehostip*_*sourcehostname*_*sourcehostmac*
01-08-2019 11:18:06.610 +0700 INFO TailReader - Could not send data to output queue (structuredParsingQueue), retrying...
01-08-2019 11:18:11.610 +0700 INFO TailReader - ...continuing.
01-08-2019 11:18:16.611 +0700 INFO TailReader - Could not send data to output queue (structuredParsingQueue), retrying...
This issue occurs only on this particular host, so I created a limits.conf file to raise the thruput limit, as suggested here:
link:troubleshootingindexdelay
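The exact stanza I used isn't shown above, but it was roughly the following, placed in $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder (the maxKBps value is just an example; 0 removes the throughput cap):
[thruput]
# 0 = uncapped; the default on a universal forwarder is 256 KBps
maxKBps = 0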
Unfortunately, that did not resolve the issue; we still get the same queue-blocked messages in splunkd.log.
Any idea which part I should look into next?