All,
I am trying to understand how I can have full queues on a heavy forwarder but have plenty of CPU and RAM available. Is there something I am supposed to do to get Splunkd to use the resources available? Increase queue sizes or tell it to use more cores?
thanks
-Daniel
Hi daniel333, a few things could be going on here, but if you are on Splunk 6.3 or later, you can make use of the multiple processing pipelines capability to increase throughput and take advantage of the extra hardware available. More here: http://docs.splunk.com/Documentation/Splunk/6.3.4/Capacity/Parallelization
Essentially set parallelIngestionPipelines in server.conf.
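For example, a minimal sketch (the pipeline count of 2 is just an illustration; size it to the cores you actually have, and splunkd needs a restart to pick it up):

    # server.conf on the heavy forwarder
    [general]
    parallelIngestionPipelines = 2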
Alternatively, there could be an issue with the upstream indexer. What do your indexer queues look like?
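One quick way to check is the queue metrics in _internal; a sketch along these lines (the host value is a placeholder for your indexer):

    index=_internal host=<your_indexer> source=*metrics.log* group=queue
    | eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
    | timechart span=5m perc95(fill_pct) by name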
There could also be some network bottleneck preventing the HF from writing to the network in a timely manner.
Please let me know if this answers your question!
Queues are a flat 0 on my upstream indexer. I have more resources than I know what to do with at the indexer level.
Heavy forwarder queues seem to bounce from fine to terrible all day long. Network utilization is currently less than 1% of the link on both ends.
Does parallelIngestionPipelines apply to the heavy forwarder?
Yup, set that on the HF. You'll probably want to set the same setting on the indexer as well if you have the slack hardware.
Slack hardware?
So I set it on a couple of my heavy forwarders to see the result. They went from 10% CPU to 50-70% CPU usage, which is a good thing. But what I am seeing in the DMC is that one of the pipelines is again maxed out while the other is unused.
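For reference, a search along these lines should break the queue fill out per pipeline in metrics.log (a sketch; the host is a placeholder, and it assumes the ingest_pipe field is populated once parallel pipelines are enabled):

    index=_internal host=<your_heavy_forwarder> source=*metrics.log* group=queue name=parsingqueue
    | eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
    | timechart span=5m perc95(fill_pct) by ingest_pipe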