Hi,
We've recently upgraded to a Splunk 6.2 indexer cluster, but we're finding that searches hang and the system becomes unresponsive, forcing us to restart everything. Our hardware usage doesn't spike, but memory consumption is high (as usual).
Are there any diagnostics we can run after we've successfully restarted the system to find out what the root cause was?
Thanks,
Ryan
Please run the following searches to find your needle in the haystack:
index=_internal source=*splunkd.log (WARN OR ERR*)
-or, in case the relevant messages aren't in splunkd.log-
index=_internal source=* (WARN OR ERR*)
You might also consider adding _index_earliest=-1h to the search to see only events indexed in the last hour, which helps narrow the results down to exactly when the issue occurred.
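For example, a minimal sketch combining the above (adjust the time window to match when the hang actually happened):

index=_internal source=*splunkd.log (WARN OR ERR*) _index_earliest=-1h

If that still returns too many results, splunkd events typically have log_level and component fields extracted, so appending something like | stats count by component can show which subsystem was logging the most warnings and errors around the time of the hang.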
Hi @rchan11
Just to clarify for other users: are you referring to search head clustering, indexer clustering, or both?
Hi ppablo,
We have 1 search head and 2 clustered indexers.
Thanks
Ryan