Getting Data In

What can be done with an indexer in a "hung" state?

ddrillic
Ultra Champion

We keep hitting situations where one of our ten indexers reaches a "hung" state. All the major queues stay filled for hours and hours, and even a restart doesn't always clear it. For example, I bounced it earlier today, and after an hour or two it was back in the same state. What can be done?


harsmarvania57
Ultra Champion

Looks like a storage issue or low IOPS on the first indexer, because when the indexing queue fills up, the queues upstream of it fill up as well.
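
If it helps, this is the kind of search I run against the indexer's own metrics.log to see which queue backs up first (the host name is a placeholder, replace it with your indexer):

index=_internal host=idx01 source=*metrics.log group=queue
| eval fill_pct=round(current_size_kb/max_size_kb*100,1)
| timechart span=5m perc95(fill_pct) by name

If indexqueue hits 100% before parsingqueue/aggqueue/typingqueue, that points at slow disk writes rather than a parsing problem.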

ddrillic
Ultra Champion

Right, so remove this indexer from the outputs.conf that applies to most of our forwarders...
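
Something along these lines on the forwarders, assuming a standard tcpout group (the group name and host names are just placeholders for illustration):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# hung indexer (idx01 in this example) temporarily removed from the list
server = idx02.example.com:9997, idx03.example.com:9997, idx04.example.com:9997

The forwarders will auto-load-balance across the remaining servers until you add it back.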


harsmarvania57
Ultra Champion

Well, I would first remove it from the few forwarders that send the most data to this indexer. In parallel, I'd check search activity on this indexer from the search head. If a lot of jobs are running on this indexer, it uses more IOPS to read data, and write performance might be degraded because of that...
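
One rough way to gauge search load from the indexer's own metrics.log (field names can vary by Splunk version, so treat this as a sketch and verify against your environment; the host name is a placeholder):

index=_internal host=idx01 source=*metrics.log group=search_concurrency "system total"
| timechart span=5m max(active_hist_searches) AS historical_searches max(active_realtime_searches) AS realtime_searches

If concurrency on that one indexer is much higher than on its peers, that would support the read-versus-write IOPS contention theory.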


ddrillic
Ultra Champion

Ended up bouncing it when all the queues were at 100%, before the VM could crash ;-)
