We saw a spike in memory usage on one of the clustered search heads, and it persisted for around 12 hours. When comparing splunkd.log across all the search heads, the impacted one had something the others did not. The warning in splunkd.log looks like this:
Spent 10777ms reaping search artifacts in /opt/splunk/var/run/splunk/dispatch
Can anyone help me determine whether the warning above could cause excessive memory usage?
The message indicates that Splunk spent 10.777 seconds removing expired search artifacts from the dispatch directory. I suspect this warning is more a symptom than a cause: slow reaping usually means the dispatch directory holds an unusually large number of artifacts, or the underlying storage is slow. But it's hard to say more with the information at hand.
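As a first step, it may help to see how large the dispatch directory actually is on the affected search head. A minimal sketch, assuming the same /opt/splunk path shown in the warning (adjust SPLUNK_HOME for your install):

```shell
#!/bin/sh
# Inspect the dispatch directory the reaper was cleaning.
# Path is an assumption based on the log message; override via SPLUNK_HOME.
DISPATCH="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"

if [ -d "$DISPATCH" ]; then
    # Each search job leaves one subdirectory; tens of thousands
    # of entries can make reaping (and directory scans) slow.
    echo "artifact count:"
    find "$DISPATCH" -mindepth 1 -maxdepth 1 -type d | wc -l

    # Total disk usage of the artifacts.
    echo "disk usage:"
    du -sh "$DISPATCH"
else
    echo "dispatch directory not found: $DISPATCH"
fi
```

If the count is very high, the next place to look is which searches (scheduled, real-time, or ad hoc with long TTLs) are leaving artifacts behind, rather than the reaper itself.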