Hi, I have a scenario where I was getting a lot of
INFO TailingProcessor - Could not send
data to output queue (parsingQueue),
retrying...
messages in splunkd.log on my Universal Forwarders.
After reading documentation http://docs.splunk.com/Documentation/Splunk/6.0.2/Troubleshooting/Troubleshootingeventsindexingdelay
I increased the thruput limit on the Universal Forwarders, but the "Could not send data to output queue" messages increased instead of decreasing.
Initially the thruput was 2MBps, and I increased it to 5MBps.
Can someone help me confirm whether the increased thruput raised the load on the indexer, blocking its queues, and hence caused the increase in "Could not send data to output queue" messages?
If I increase thruput on the Universal Forwarders, do I also need to increase the queue sizes (splunktcpin, typing, parsing, etc.) on the indexers?
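For context, the two settings in question look roughly like this. The forwarder-side thruput limit is `maxKBps` in limits.conf (note the unit is KB per second, so 2MBps ≈ 2048 and 5MBps ≈ 5120), and indexer-side queue sizes are set per queue in server.conf. The sizes below are illustrative placeholders, not recommendations:

```ini
# limits.conf on the Universal Forwarder
[thruput]
# Kilobytes per second; 0 means unlimited.
# 2 MB/s ~= 2048, 5 MB/s ~= 5120
maxKBps = 5120

# server.conf on the indexer (example sizes only)
[queue=parsingQueue]
maxSize = 10MB

[queue=indexQueue]
maxSize = 10MB
```

Growing a queue only buys buffer time; if the downstream stage (parsing/typing/indexing or disk I/O) can't keep up with the sustained rate, the queue fills again regardless of its size.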
Hi Ayn,
[Image: event count over time for 3 files]
The image above plots the event count over time for 3 files; you can see a sudden dip to 0 events. The events were actually present in the files, but for some reason were not indexed in Splunk.
During the same period I had a lot of "Could not send data to output queue" messages, the instantaneous throughput was around 3MBps, and the lag between index time and the event timestamp was also elevated. So I thought increasing thruput would solve that issue, but no luck.
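To see whether the indexer queues were actually blocked during that dip, and to chart the indexing lag I describe above, searches along these lines can help (the index and sourcetype in the second search are placeholders for your own data):

```
index=_internal source=*metrics.log group=queue blocked=true
| timechart count by name

index=your_index sourcetype=your_sourcetype
| eval lag=_indextime - _time
| timechart span=5m avg(lag) AS avg_lag_seconds max(lag) AS max_lag_seconds
```

If the first search shows a specific queue (e.g. indexqueue) blocking at the same time the lag spikes, the bottleneck is downstream of that queue rather than the forwarder's thruput limit.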