We have 24 indexers in an indexer cluster. Recently, CPU usage has been at almost 100%, not on all the indexers at once but fluctuating between them. Under the Indexer Clustering section, I can see the status randomly going to "Pending" on different indexers for a few seconds. This happens continuously and also causes an increase in the number of fixup buckets.
I have manually restarted the indexer servers where I saw high CPU load, but it did not resolve the issue. What would be the best way to fix this, and what could be the root cause?
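For context, this is roughly how I have been counting the "Pending" transitions from the cluster manager's internal logs (just a sketch; the exact CMMaster message text varies by Splunk version, so extracting the affected peer name may need a rex tailored to your logs):

    index=_internal sourcetype=splunkd component=CMMaster Pending
    | timechart span=1m count AS pending_transitions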
Any suggestions would be very helpful.
Thanks in advance!
Hi
There could be several reasons why indexers show high CPU usage. It's really hard to make a correct guess without seeing what is happening via the Monitoring Console (MC). I assume you have the MC in place? If you have, then use it, and if you haven't, then it's time to set it up now.
Under the MC there are several places where you can look to narrow down which part of the system the issue is in, e.g. the Resource Usage and Indexing Performance dashboards.
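For example, if the default introspection inputs are enabled, a search along these lines (a sketch; adjust host=idx* to your indexer naming) shows which process type is eating the CPU on each indexer:

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess host=idx*
    | timechart span=5m sum(data.pct_cpu) by data.process_type

If the search process type dominates, the load is coming from search activity; if splunkd itself dominates, look at the indexing and replication side.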
There also needs to be a good understanding of what kind of environment you have before making guesses about where the issue could be.
The best option would be to ask a Splunk partner/specialist or Splunk Professional Services to look at your environment.
r. Ismo
Hi,
I noticed that the ingestion latency health check is turning yellow on the indexers. There is also a delay in searching the data. The indexing queues show as blocked when checking the internal logs of the heavy forwarders.
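This is roughly the search I used against the heavy forwarders' metrics.log to see the blocked queues (a sketch; the blocked=true field only appears on queue lines while a queue is actually blocked):

    index=_internal sourcetype=splunkd source=*metrics.log* group=queue
    | eval is_blocked=if(blocked="true", 1, 0)
    | timechart span=5m sum(is_blocked) by name

The search delay itself shows up as a growing gap between _indextime and _time on recently indexed events.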
Could performing a rolling restart of the indexers help, since it has been a very long time since we last did one?
Without any concrete data it's just fortune telling.
Check processes, check I/O saturation, check memory usage. Verify whether it's even Splunk that's hogging the CPU.
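If you don't have shell access handy, the host-level picture can also be read from the introspection data, assuming the default introspection inputs are running (a sketch; verify the field names against your version):

    index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=idx*
    | timechart span=5m avg(data.cpu_system_pct) AS sys_cpu avg(data.cpu_user_pct) AS user_cpu avg(data.normalized_load_avg_1min) AS load_avg

Compare that with the per-process view (component=PerProcess) to see whether splunkd and its search processes account for the host-level load; component=IOStats in the same index covers disk saturation.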
Blindly restarting processes will probably not help much without addressing the underlying cause.
Has anything been changed recently? Upgraded?