Hi, I can see blocked=true in metrics.log on a Splunk heavy forwarder. The blocked queues are: typingqueue, aggqueue, parsingqueue, indexqueue, and splunktcpin. Does anyone have any idea about this issue?
Note: the queue blockage happens intermittently on individual heavy forwarders.
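For context, I'm spotting the blockage in the internal index with a search like the one below (the host filter is illustrative):

    index=_internal source=*metrics.log* group=queue blocked=true host=my-heavy-forwarder
    | timechart count by name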
This means that the heavy forwarder can't parse the data quickly enough.
Easy fix: add more resources (CPU, etc.) or add a whole extra heavy forwarder to share the load.
Harder fix: examine all the data the heavy forwarder is processing. Set a correct LINE_BREAKER value for all high-volume source types so that you can set SHOULD_LINEMERGE to false, as in the sketch below.
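For example, a minimal props.conf sketch, assuming single-line events (the sourcetype name is illustrative):

    # props.conf on the heavy forwarder
    [my_high_volume_sourcetype]
    # the first capture group is consumed as the event boundary
    LINE_BREAKER = ([\r\n]+)
    # events are already broken correctly, so skip the expensive merge pass
    SHOULD_LINEMERGE = false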
Also be aware that nullQueueing events is computationally expensive, because each event still goes through the full parsing pipeline before it is discarded.
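For reference, nullQueue routing is configured with a props.conf/transforms.conf pair like this (the sourcetype name and regex are illustrative):

    # props.conf
    [my_noisy_sourcetype]
    TRANSFORMS-drop_debug = drop_debug_events

    # transforms.conf
    [drop_debug_events]
    REGEX = level=DEBUG
    DEST_KEY = queue
    FORMAT = nullQueue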
Good luck!
@manchitmalik
Within the UF you can manage queue sizes as below in the $SPLUNK_HOME/etc/system/local/server.conf file to increase the parsing queue:
    # the default size
    [queue=parsingQueue]
    maxSize = 500

    # a reasonable size if watching a DNS server
    [queue=parsingQueue]
    maxSize = 10MB

    # if you are crazy and want to allow unthrottled forwarding. USE WITH CARE
    [queue=parsingQueue]
    maxSize = 0
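Note that splunkd needs a restart for server.conf changes to take effect, and a larger queue only adds buffering headroom; if the forwarder can't keep up on average, the queue will eventually fill again.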