I am having problems with high memory consumption on my search head.
During the periods when scheduled alerts run, RAM consumption reaches an incredible 377 GB, which causes the system to terminate the Splunk process.
Is there a search I can run to find out the memory consumption of each Splunk alert,
so I can identify which alert is causing this high consumption?
`comment("As originally found on https://answers.splunk.com/answers/500973/how-to-improve-my-search-to-identify-queries-which.html / DalJeanis with minor modifications. Max memory used per search process at search head level")` index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=* | stats max(data.mem_used) AS peak_mem_usage, latest(data.search_props.mode) AS mode, latest(data.search_props.type) AS type, latest(data.search_props.role) AS role, latest(data.search_props.app) AS app, latest(data.search_props.user) AS user, latest(data.search_props.provenance) AS provenance, latest(data.search_props.label) AS label, latest(host) AS splunk_server, min(_time) AS min_time, max(_time) AS max_time by data.search_props.sid, host | sort - peak_mem_usage | head 50 | table provenance, peak_mem_usage, label, mode, type, role, app, user, min_time, max_time, data.search_props.sid splunk_server | eval min_time=strftime(min_time, "%+"), max_time=strftime(max_time, "%+") | rename data.search_props.sid AS sid, peak_mem_usage AS "Peak Physical Memory Usage (MB)", min_time AS "First time seen", max_time AS "Last time seen"
You might want to narrow it down to your search heads with a host= filter in the base search.
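For example, assuming your search heads follow a hypothetical naming pattern like sh01, sh02, etc. (adjust the pattern to your environment):

index=_introspection host=sh0* sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*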
The Monitoring Console also has a view of searches that use a lot of memory.
It sounds like your instance has some really inefficient searches, and most of them may be executing at around the same time. You can set up the Monitoring Console to figure out what is causing it.