Splunk Search

After upgrading to a Splunk 6.2 indexer cluster, why do searches hang with high memory consumption, forcing us to restart?

rchan11
Explorer

Hi,

We've recently upgraded to a Splunk 6.2 indexer cluster, but we're finding that searches hang and the system becomes unresponsive, forcing us to restart the entire system. Our hardware metrics don't spike, but memory consumption is high (which is normal for us).

Is there any diagnostic we can run after we've successfully restarted the system to find out what the root cause was?

Thanks,
Ryan

0 Karma

jkat54
SplunkTrust

Please run the following searches to find your needle in the haystack:

index=_internal source=*splunkd.log (WARN OR ERR*)

(or maybe it won't be splunkd.log)

index=_internal source=* (WARN OR ERR*)

You might also consider adding _index_earliest=-1h to see events that were indexed in the last hour, etc., to help narrow the results down to exactly when the issue occurred.
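For example, combining the two (assuming the hang happened within the last hour; adjust the window to match your outage):

index=_internal source=*splunkd.log (WARN OR ERR*) _index_earliest=-1h

Appending something like | stats count by component can also help surface which splunkd component was logging errors most often around that time.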

0 Karma

ppablo
Retired

Hi @rchan11

Just to clarify for other users: are you referring to search head clustering, indexer clustering, or both?

0 Karma

rchan11
Explorer

Hi ppablo,

We have 1 search head and 2 clustered indexers.

Thanks
Ryan

0 Karma