Getting Data In

Heavy Forwarder blocked indexqueue

willsy
Communicator

Hello, 

Could someone tell me what I need to do to sort this issue out, please?

I have inputs going into my HF; however, it seems that my HF's index queue is blocked and is backing up the rest of my queues.

05-03-2021 09:25:58.559 +0100 INFO Metrics - group-queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1307, largest_size=1500, smallest_size=170
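
As a side note, metrics lines like the one above land in the internal index, so a search along these lines (field names taken from the metrics line itself; adjust to taste) can show how often each queue reports itself as blocked:

```spl
index=_internal source=*metrics.log* group=queue name=indexqueue
| stats count(eval(blocked="true")) AS blocked_count count AS total_samples BY host
```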

I assume (and if you assume, it makes an ass out of you and me) that I need to change the server.conf max size, but before I go all in I just want to check what the repercussions are and what I should change.
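
For what it's worth, a sketch of what that server.conf change might look like; the stanza and attribute names here follow the server.conf spec as I understand it, so verify against the spec file for your Splunk version before applying:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
# Raise the in-memory index queue from the default (~500KB in the metrics
# line above). This only buffers more data in memory; it does not fix a
# slow downstream destination, which is the usual root cause of blocking.
[queue=indexQueue]
maxSize = 10MB
```

Note that a larger queue just delays the backpressure: if the third-party system can't keep up, the queue will eventually fill at any size.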

Please note that I am not actually indexing any data; I am purely forwarding it on to a third-party system.

Obviously, if I am wrong then please tell me. Any help is greatly appreciated.


1 Solution

gcusello
SplunkTrust
SplunkTrust

Hi @willsy,

First, you should check whether the resources you're using are sufficient for the work the HF has to do; you can do this using the Monitoring Console, paying particular attention to CPU load.

I experienced a queue block in the past, when I had to send a large volume of syslog to a third party and it choked my HF.

To solve the problem, Splunk Support suggested two interventions:

  • use parallel pipelines,
  • reduce the quantity of syslog.

I don't know whether you're sending syslog, so I'll focus on the first point:

You have to put the following in $SPLUNK_HOME/etc/system/local/server.conf on your HF:

parallelIngestionPipelines = 2 

Unfortunately, it isn't possible to use a greater value, even if you have more than two CPUs.
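
In context, a sketch of where the setting sits in server.conf (placing it under the [general] stanza is my reading of the server.conf layout; check the spec for your version):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
# Run two independent ingestion pipeline sets, each with its own
# parsing/indexing/output queues. Assumes the HF has spare CPU cores,
# since each pipeline set consumes its own threads.
parallelIngestionPipelines = 2
```

A restart of the HF is needed for the change to take effect.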

If this doesn't solve the issue, I suggest opening a case with Splunk Support.

Ciao.

Giuseppe



willsy
Communicator

Hello, thank you so much, this sorted it. However, the network ramifications were not expected: the data volume, which is usually 200 events an hour, went up to 45,000 and saturated our license. But it's fine. Thank you.
