Monitoring Splunk

Issue with alert running from DMC for memory overrun.

mookiie2005
Communicator

Hello,

We have an alert in place that uses the REST API to determine when a server is using too much memory, and the server is then restarted. It had been working great; however, last week an alert came through that listed every box connected to the DMC, and the resulting restarts caused an issue. When I look at the stats for those machines at the time of the alert, none of the servers show meeting that condition. Am I doing something wrong here?

We are using the search below:

| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide
| eval percentage=round(mem_used/mem,3)*100
| where percentage > 90
| fields splunk_server, percentage, mem_used, mem
| rename splunk_server AS ServerName, mem AS "Physical memory installed (MB)", percentage AS "Memory used (%)", mem_used AS "Memory used (MB)"
| rex field=ServerName "\s*(?<ServerName>\w+[\d+]).*"
| table ServerName
| sort - ServerName
| stats list(ServerName) as ServerName delim=","
| nomv ServerName
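
For troubleshooting, here is a minimal sketch of a variant (assuming the same dmc_group_* server group and the same hostwide resource-usage endpoint) that keeps the raw mem and mem_used values in the alert output, so the numbers that actually triggered the threshold are captured at fire time rather than reconstructed later:

| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide
| eval percentage=round(mem_used/mem*100,1)
| where percentage > 90
| table splunk_server, mem, mem_used, percentage
| sort - percentage

Keeping the raw fields in the triggered alert's results makes it possible to compare what the REST call returned at alert time against what the servers report afterward.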