Getting Data In

Indexqueue blocked on Heavy Forwarder

cmlombardo
Path Finder

I know there are similar posts about this, but I am not sure what to do or tweak here.

Messages I am getting are similar to this:
01-05-2024 09:35:07.049 -0800 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=815, largest_size=1764, smallest_size=0

I already set parallelIngestionPipelines = 2.
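For reference, a minimal sketch of where that setting lives (the standard server.conf location; adjust the path to your deployment):

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
parallelIngestionPipelines = 2

The ingest_pipe=1 field in the metrics line above only appears when more than one pipeline is configured, so the setting is in effect.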

Also, there is no indication of resource exhaustion on these Heavy Forwarders. CPU is constantly below 25% and RAM is low as well.

What else can I check/do/configure to avoid this?
Also, what happens to the data when this happens?

Thank you!


cmlombardo
Path Finder

Thank you for your comments. I had the feeling this might be a problem upstream but I wanted to make sure.

richgalloway
SplunkTrust

Queues become blocked when the corresponding pipeline is too slow to keep up with incoming data.  In this case, the index pipeline is unable to send data out as fast as it's coming in.  Verify the HF's destinations are all up, listening, and reachable.
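As a rough sketch, a search like this (with <your_hf> as a placeholder for the HF's hostname) surfaces forwarding trouble in the HF's own logs:

index=_internal host=<your_hf> source=*splunkd.log* TcpOutputProc (WARN OR ERROR)

Connection resets, timeouts, or "blocked" messages from TcpOutputProc usually point at the receiving tier rather than the HF itself.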

---
If this reply helps you, Karma would be appreciated.

isoutamo
SplunkTrust

Hi

When the indexqueue is blocked on a HF (or any other instance), you should start by looking at the next host in the chain, the one receiving those events. Quite often the real issue (if there is any) is found there. Just use the Monitoring Console (MC) to see how the queues and pipelines are doing on it. Usually it's not an issue if a queue is blocked from time to time.
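If you prefer a search over the MC dashboards, a sketch like this (with <your_indexers> as a placeholder) shows how full each queue is on the receiving side, using the same current_size_kb and max_size_kb fields from metrics.log:

index=_internal host=<your_indexers> source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m avg(fill_pct) by name

A queue that sits near 100% for long stretches is the one to investigate.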
r. Ismo
