Splunk Search

After upgrading to a Splunk 6.2 indexer cluster, why do searches hang with high memory consumption, forcing us to restart?

rchan11
Explorer

Hi,

We've recently upgraded to a Splunk 6.2 indexer cluster, but we're finding that searches hang and the system becomes unresponsive, forcing us to restart the entire system. Our hardware usage doesn't spike, but memory consumption is high (as usual).

Is there any diagnostic we can run after we've successfully restarted the system to find out what the root cause was?

Thanks,
Ryan


jkat54
SplunkTrust

Please run the following searches to find your needle in the haystack:

index=_internal source=*splunkd.log (WARN OR ERR*)

-or maybe it won't be splunkd.log-

index=_internal source=* (WARN OR ERR*)

You might also consider adding _index_earliest=-1h to see events that were indexed in the last hour, etc., to help narrow the search results down to exactly when the issue occurred.
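For example, a sketch combining the two suggestions above (assuming the issue surfaced within the last hour):

index=_internal source=*splunkd.log (WARN OR ERR*) _index_earliest=-1h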


ppablo
Retired

Hi @rchan11

Just to clarify for other users: are you referring to search head clustering, indexer clustering, or both?


rchan11
Explorer

Hi ppablo,

We have 1 search head and 2 clustered indexers.

Thanks
Ryan
