Splunk Cloud Platform

CMC indicating high memory searches in Splunk Cloud

mchoudhary
Explorer

Lately, my CMC has been indicating that there are 20 searches exceeding 10% of system memory. However, when I click on it, I don't see which searches are listed as high memory searches; all it gives is the search ID, app, memory used, and status (refer to the images below).

[Image: mchoudhary_0-1741781095593.png]

[Image: mchoudhary_1-1741781219028.png]

Could anyone please suggest how to troubleshoot this and find which searches fall under High memory searches?


livehybrid
SplunkTrust

Hi @mchoudhary 

Judging by the app in which the search is running, it is likely that this is an accelerated data model generation search which is using a lot of memory.

The easiest way to see what the search is would be to run something like the following, updating the search_id value with the search ID of the search you wish to investigate. Note: the single quotes are required when I run this search due to how the search_id is parsed, so it's worth keeping those in.

index=_audit search_id='1741783414.663348' info=granted search=* 

This will give info on the search, the user, the app, the provenance (which is usually the dashboard name if it's run from within a dashboard) and a bunch of other info such as start/end times:

[Image: livehybrid_0-1741783601179.png]
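
If you want to check several of the flagged SIDs in one go, a small variation on the same idea (a minimal sketch; <sid_1> and <sid_2> are placeholders for SIDs taken from the CMC, and fields such as provenance may vary slightly by version):

index=_audit info=granted search=* (search_id='<sid_1>' OR search_id='<sid_2>')
| table _time search_id user app provenance search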

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards

Will


kiran_panchavat
SplunkTrust

@mchoudhary 

The search ID (SID) is your key to finding the actual search. Splunk logs this information in the _audit or _internal indexes.

Replace <your_search_id> with one of the SIDs from the CMC (e.g., search_id="1741781823.13254").
 
[Image: kiran_panchavat_1-1741782304428.png]
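
For reference, such an _audit lookup might look roughly like this (a minimal sketch; <your_search_id> is the placeholder mentioned above, and the quoting around the SID may need adjusting as noted in the other answer):

index=_audit info=granted search_id="<your_search_id>" search=*
| table _time user app search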

Since you're on Splunk Cloud, you don't have direct access to limits.conf or server-level configs to set memory thresholds (e.g., search_process_memory_usage_threshold). If the issue persists after optimization or if the CMC isn't giving enough detail, open a support ticket with Splunk. Provide:

  • The SIDs from the CMC.
  • The searches you identified.
  • A screenshot of the CMC panel (since you referenced an image).

They can check backend logs or adjust search concurrency/memory limits for you.

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

kiran_panchavat
SplunkTrust

@mchoudhary 

[Image: kiran_panchavat_0-1741782019688.png]

https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Using_the_Splunk_Cloud_Monito... 

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

kiran_panchavat
SplunkTrust

@mchoudhary 

You can use an introspection search to find the high-memory-consuming searches:

index=_introspection sourcetype=splunk_resource_usage data.search_props.sid::* data.search_props.mode!=RT data.search_props.user!="splunk-system-user"
| eval process = 'data.process'
| eval args = 'data.args'
| eval pid = 'data.pid'
| eval ppid = 'data.ppid'
| eval elapsed = 'data.elapsed'
| eval mem_used = 'data.mem_used'
| eval mem = 'data.mem'
| eval pct_memory = 'data.pct_memory'
| eval pct_cpu = 'data.pct_cpu'
| eval sid = 'data.search_props.sid'
| eval app = 'data.search_props.app'
| eval label = 'data.search_props.label'
| eval type = 'data.search_props.type'
| eval mode = 'data.search_props.mode'
| eval user = 'data.search_props.user'
| eval role = 'data.search_props.role'
| eval label = if(isnotnull('data.search_props.label'), 'data.search_props.label', "")
| eval provenance = if(isnotnull('data.search_props.provenance'), 'data.search_props.provenance', "unknown")
| eval search_head = case(isnotnull('data.search_props.search_head') AND 'data.search_props.role' == "peer", 'data.search_props.search_head', isnull('data.search_props.search_head') AND 'data.search_props.role' == "head", "_self", isnull('data.search_props.search_head') AND 'data.search_props.role' == "peer", "_unknown")
| eval search_label = if('label'!="", 'label', 'sid')
| eval instance = if(isnotnull(dns_alt_name), dns_alt_name, host)
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as _time by search_label, provenance, type, mode, app, role, user, instance
| eval mem_used = round(mem_used, 2)
| sort 20 - mem_used, runtime
| eval runtime = tostring(round(runtime, 2), "duration")
| fields search_label, provenance, mem_used, instance, runtime, _time, type, mode, app, user, role
| eval _time=strftime(_time,"%+")
| rename search_label as Name, provenance as Provenance, mem_used as "Memory Usage (KB)", instance as Instance, runtime as "Search Duration", _time as Started, type as Type, mode as Mode, app as App, user as User, role as Role
| appendpipe
[ stats count
| eval Name="data unavailable"
| where count==0
| table Name ]
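
The introspection data gives you the SID and memory usage but not the search string itself, so if you want to map the top memory consumers straight to the actual search text, one option is to combine it with the _audit approach from the other answers. This is a minimal sketch under those assumptions (the left join and the quote-stripping on search_id may need tuning for your environment, for example for SIDs reported by indexer peers):

index=_introspection sourcetype=splunk_resource_usage data.search_props.sid::* data.search_props.mode!=RT data.search_props.user!="splunk-system-user"
| eval sid='data.search_props.sid', mem_used='data.mem_used'
| stats max(mem_used) as mem_used by sid
| sort 10 - mem_used
| join type=left sid
    [ search index=_audit info=granted search_id=*
      | eval sid=replace(search_id, "'", "")
      | stats latest(user) as user latest(app) as app latest(search) as search by sid ]
| table sid user app mem_used search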

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!