Hi, I have a scenario where I was getting a lot of

INFO TailingProcessor - Could not send data to output queue (parsingQueue),

errors in splunkd.log on my Universal Forwarders.
After reading the documentation at http://docs.splunk.com/Documentation/Splunk/6.0.2/Troubleshooting/Troubleshootingeventsindexingdelay I increased the thruput limit on the Universal Forwarders, but the "Could not send data to output queue" messages increased instead of decreasing. Initially the thruput was 2MBps and I raised it to 5MBps.
Could someone help me confirm whether the increased thruput put more load on the Indexer, causing its queues to block, which is why the "Could not send data to output queue" messages increased? If I increase thruput on the Universal Forwarders, do I also need to increase the queue sizes (splunktcpin, typing, parsing, etc.) on the Indexers?
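For context, the forwarder thruput limit lives in limits.conf and per-queue sizes can be set in server.conf on the indexer. The stanza and setting names below are the standard ones, but the values are purely illustrative assumptions, not recommendations:

```ini
# limits.conf on the Universal Forwarder
# maxKBps is in kilobytes per second; 5120 KBps is roughly 5 MBps (example value)
[thruput]
maxKBps = 5120

# server.conf on the Indexer
# queue sizes can be raised per queue; 10MB here is only an example
[queue=parsingQueue]
maxSize = 10MB

[queue=typingQueue]
maxSize = 10MB
```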
The image above is a plot of event count over time for 3 files. You can see a sudden dip to 0 events. The events were actually present in the files, but for some reason they were not indexed in Splunk. During the same period I had a lot of "Could not send data to output queue" messages, the instantaneous KBps was around 3MBps, and the difference between index time and event timestamp was also elevated. So I thought increasing the thruput would solve the issue, but no luck.
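The index-time lag I mention above can be charted with a search along these lines (a sketch; your_sourcetype is a placeholder for the actual data):

```
sourcetype=your_sourcetype
| eval lag_seconds = _indextime - _time
| timechart avg(lag_seconds) max(lag_seconds)
```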
I don't entirely get what your current situation is - are you actually getting blocked queues on your indexers, or are you just worried that you might be?
Indexers will block queues if they can't keep up with the incoming data. How and when that happens depends entirely on your server specifications, how you've configured your Splunk instance and, of course, how much data your indexer is handling. The reference hardware can handle up to 500GB a day, so if your setup is similar to that, it's very unlikely you should be worried about increasing (or even removing) the output limits on your forwarders - unless you're really sending huge amounts of data, and if you are, limiting isn't a good idea either, because events from that forwarder will lag behind.
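One way to check whether the indexer's queues are actually blocking is to search its internal metrics (a sketch, assuming default internal logging is in place; queue events in metrics.log carry a blocked flag when a queue fills up):

```
index=_internal source=*metrics.log group=queue blocked=true
| timechart count by name
```

If that search returns nothing over the problem period, the queues weren't blocking and the cause lies elsewhere.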