Splunk Enterprise

CPU consumption is very high in Splunk indexers

shadysplunker
Explorer

We have 24 indexers in an indexer cluster. Recently, CPU usage has been close to 100% — not on all the indexers at once, but fluctuating between them. Under the indexer clustering section, I can see the status of individual indexers flipping to "Pending" for a few seconds at a time, seemingly at random. This happens continuously and also causes an increase in the number of fixup buckets.

I have manually restarted the indexer servers where I saw high CPU load, but that did not resolve the issue. What would be the best way to fix this, and what is the likely root cause?
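For reference, the clustering and peer status can also be checked from the CLI on the cluster manager (the `--verbose` flag adds per-peer detail):

```
splunk show cluster-status --verbose
```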

Any suggestions would be very helpful. 

Thanks in advance!


isoutamo
SplunkTrust

Hi

There could be several reasons why indexers show high CPU usage. It's really hard to make a correct guess without seeing what is happening via the Monitoring Console (MC). I assume you have the MC in place? If you have, then use it; if you haven't, now is the time to set it up.

In the MC there are several places you could look to narrow down which part of the system the issue is in:

  • ingestion
    • which pipeline
  • disk I/O etc.
  • searching
    • indexer vs. search head side
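If you want to see the same queue data the MC uses without clicking through its dashboards, it is in _internal. A rough sketch (run from a search head that sees the indexers; adjust the time range to the problem window):

```
index=_internal sourcetype=splunkd source=*metrics.log* group=queue
| eval blocked=if(blocked=="true",1,0)
| timechart span=5m sum(blocked) by name
```

The furthest-downstream queue that blocks (e.g. indexqueue) usually points at the bottleneck; queues upstream of it back up as a consequence.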

You also need a good understanding of what kind of environment you have before making guesses about where the issue could be.

The best option would be to ask a Splunk partner/specialist or Splunk Professional Services to look at your environment.

r. Ismo

shadysplunker
Explorer

Hi,

I noticed that the ingestion latency health check is turning yellow for the indexers, and there is also a delay before data becomes searchable. Checking the internal logs of the heavy forwarders, the indexing queues show as blocked.

Could performing a rolling restart of the indexers help, since it has been a very long time since the last one?
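If a rolling restart is the way to go, my understanding is that it should be initiated from the cluster manager rather than restarting peers one by one, so the peers are restarted in a controlled sequence:

```
splunk rolling-restart cluster-peers
```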


PickleRick
SplunkTrust

Without any concrete data it's just fortune telling.

Check processes, check I/O saturation, check memory usage. Verify whether it's even Splunk that's hogging the CPU.
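As a concrete starting point, something like this on one of the affected indexers (Linux, procps tools assumed) shows whether splunkd is actually the top consumer:

```shell
# Top 5 processes by CPU usage; look for splunkd and its child processes
ps aux --sort=-%cpu | head -n 6

# Memory and swap usage; heavy swapping can masquerade as CPU load
free -m

# If sysstat is installed, extended disk stats reveal I/O saturation:
# iostat -x 1 3
```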

Blindly restarting processes will probably not help much without addressing the underlying cause.

Has anything been changed recently? Upgraded?
