Splunk Search

After upgrading to a Splunk 6.2 indexer cluster, why do searches hang with high memory consumption, forcing us to restart?

rchan11
Explorer

Hi,

We've recently upgraded to a Splunk 6.2 indexer cluster, but we're finding that searches hang and the system becomes unresponsive, forcing us to restart the entire system. Our hardware usage doesn't spike, but memory consumption is high (as usual).

Is there any diagnostic we can run after we've successfully restarted the system to find out what the root cause was?

Thanks,
Ryan


jkat54
SplunkTrust

Please run the following searches to find your needle in the haystack:

index=_internal source=*splunkd.log (WARN OR ERR*)

 - or maybe it won't be splunkd.log -

index=_internal source=* (WARN OR ERR*)

You might also consider adding _index_earliest=-1h to see events that were indexed in the last hour, etc., to help narrow the search results down to exactly when the issue occurred. A sketch of such a narrowed search is below.
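For example, here is one way you might combine those suggestions - a sketch, assuming the problem window was within the last hour and that summarizing by the component and log_level fields (which Splunk extracts for splunkd.log events in _internal) helps surface the noisiest subsystem:

index=_internal source=*splunkd.log (WARN OR ERR*) _index_earliest=-1h
| stats count BY component, log_level
| sort - count

Components with an unusually high count of WARN/ERROR events right around the hang are good candidates for the root cause.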


ppablo
Retired

Hi @rchan11

Just to clarify for other users: are you referring to search head clustering, indexer clustering, or both?


rchan11
Explorer

Hi ppablo,

We have 1 search head and 2 clustered indexers.

Thanks
Ryan
