Our search head crashed with this message:
-- Apr 23 09:00:35 kernel: Out of memory: Kill process 2137 (splunkd) score 162 or sacrifice child
Two instances of the TA-check-point-app-for-splunk were running - one for 21 minutes consuming 21 GB of memory, and the other for 11 minutes consuming 11 GB.
What safeguards can we put in place to prevent this?
The SH crashed because the OS killed splunkd for using too much memory.
One solution is to add more memory to the SH, but that is likely only a short-term fix.
More important than knowing which apps were running at the time is knowing which searches were running at the time. Once you know that, you can take steps to prevent future crashes, such as capping how much memory a single search process is allowed to consume.
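One way to find which searches were running around the time of the crash is to query Splunk's built-in _audit index. A sketch (the time window and field selection are illustrative; verify the field names against your version's audit events):

```
index=_audit action=search info=completed earliest=-24h
| table _time, user, search_id, total_run_time, search
| sort - total_run_time
```

Long-running scheduled searches near the top of this list are the usual suspects for runaway memory use.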
We decided to use the enable_memory_tracker feature, as described in "How can I generate a search which uses lots of memory?"
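A minimal sketch of the relevant limits.conf stanza on the search head (the threshold values below are illustrative, not recommendations; check the limits.conf spec for your Splunk version and restart splunkd after editing):

```ini
[search]
# Track per-search memory usage (disabled by default)
enable_memory_tracker = true
# Terminate a search process that exceeds this absolute limit, in MB
search_process_memory_usage_threshold = 4000
# ...or that exceeds this percentage of the host's physical memory
search_process_memory_usage_percentage_threshold = 25
```

With the tracker enabled, Splunk kills an individual runaway search once it crosses either threshold, instead of letting it grow until the OS OOM killer takes down splunkd itself.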