Splunk Search

After upgrading to a Splunk 6.2 indexer cluster, why do searches hang with high memory consumption, forcing us to restart?

rchan11
Explorer

Hi,

We've recently upgraded to a Splunk 6.2 indexer cluster, but we're finding that searches hang and the system becomes unresponsive, forcing us to restart the entire system. Our hardware usage doesn't spike, but memory consumption is high (as it normally is).

Is there any diagnostic we can run after we've successfully restarted the system to find out what the root cause was?

Thanks,
Ryan

jkat54
SplunkTrust

Please run the following searches to find your needle in the haystack:

index=_internal source=*splunkd.log (WARN OR ERR*)

(or maybe it won't be splunkd.log)

index=_internal source=* (WARN OR ERR*)

You might also consider adding _index_earliest=-1h to see only events that were indexed in the last hour (or whatever window fits), which helps narrow the results down to exactly when the issue occurred.
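Putting that together, here's a minimal sketch of a time-bounded error search (the one-hour window is just an assumption; widen or shift it to cover when the hang actually happened):

index=_internal source=*splunkd.log (WARN OR ERR*) _index_earliest=-1h

Since the symptom is high memory, it may also be worth looking at the resource-usage data splunkd records in the _introspection index (assuming platform instrumentation is enabled on your 6.2 instances), for example:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd | timechart span=5m max(data.mem_used)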

ppablo
Retired

Hi @rchan11

Just to clarify for other users: are you referring to search head clustering, indexer clustering, or both?

rchan11
Explorer

Hi ppablo,

We have 1 search head and 2 clustered indexers.

Thanks
Ryan
