
Heavy Forwarder blocked indexqueue

willsy
Communicator

Hello, 

Could someone tell me what I need to do to sort this issue out, please?

I have inputs going into my HF; however, it seems as though the HF's index queue is blocked and is backing up the rest of my queues.

05-03-2021 09:25:58.559 +0100 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1307, largest_size=1500, smallest_size=170
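For reference, a search along these lines over _internal shows every queue that is reporting blocked=true (just a basic sketch over metrics.log):

index=_internal source=*metrics.log* group=queue blocked=true
| stats count latest(current_size_kb) AS current_size_kb by host, name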

I assume (and if you assume, it makes an ass out of you and me) that I need to change the max queue size in server.conf, but before I dive in I just want to check what the repercussions are and what exactly I should change.
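To be concrete, I believe the change would look something like this in server.conf (the queue name and size here are just my guess, hence the question):

[queue=indexQueue]
maxSize = 1MB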

Please note that I am not actually indexing any data; I am purely forwarding it on to a third-party system.
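For context, the forwarding setup is roughly the following in outputs.conf (the host and port are placeholders, not my real config):

[tcpout]
defaultGroup = third_party

[tcpout:third_party]
server = thirdparty.example.com:9997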

Obviously, if I am wrong then please tell me. Any help is greatly appreciated.



gcusello
SplunkTrust

Hi @willsy,

First, you should check whether the resources you're using are sufficient for the work the HF has to do. You can do this with the Monitoring Console; look at the CPU load in particular.
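If you prefer a raw search to the Monitoring Console dashboards, something along these lines against _introspection gives the same picture (a sketch; <your_hf> is a placeholder for the HF's hostname):

index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=<your_hf>
| timechart avg(data.cpu_system_pct) AS system_cpu avg(data.cpu_user_pct) AS user_cpu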

I ran into a queue block myself in the past, when I had to send a large volume of syslog data to a third party and it choked my HF.

To solve the problem, Splunk Support suggested two interventions:

  • use parallel pipelines,
  • reduce the quantity of syslog data.

I don't know whether you're sending syslog data, so I'll focus on the first option:

You have to put the following in etc/system/local/server.conf on your HF, then restart Splunk for it to take effect:

[general]
parallelIngestionPipelines = 2
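After the restart, you can check that the second pipeline is active: when parallel pipelines are enabled, metrics.log events carry an ingest_pipe field, so a search like this should show queue metrics per pipeline (again, just a sketch):

index=_internal source=*metrics.log* group=queue name=indexqueue
| stats max(current_size_kb) AS max_kb by ingest_pipe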

Unfortunately, it isn't possible to use a greater value, even if you have more than two CPUs.

If this doesn't solve the problem, I suggest opening a case with Splunk Support.

Ciao.

Giuseppe


willsy
Communicator

Hello, thank you so much, this sorted it. However, the network ramifications were not expected: data that is usually 200 events an hour went up to 45,000 and saturated our license. But it's fine. Thank you!
