
Issue with a memory-overrun alert run from the DMC

mookiie2005
Communicator

Hello,

We have an alert in place that uses the REST API to determine when a server is using too much memory, and the server is then restarted. It had been working great, but last week an alert came through that listed every box connected to the DMC, and the resulting restarts caused an issue. When we looked at the stats for those machines at the time of the alert, none of the servers appear to have met the condition. Am I doing something wrong here?
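For reference, this is roughly how we checked the historical stats, using the resource-usage data Splunk records in the _introspection index (a sketch; it assumes the standard Hostwide splunk_resource_usage data is being collected on each instance, with the time range set to the window around the alert):

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval percentage=round('data.mem_used'/'data.mem'*100,1)
| timechart span=1m max(percentage) AS "Memory used (%)" by host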

We are using the search below:

| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide
| eval percentage=round(mem_used/mem,3)*100
| where percentage > 90
| fields splunk_server, percentage, mem_used, mem
| rename splunk_server AS ServerName, mem AS "Physical memory installed (MB)", percentage AS "Memory used (%)", mem_used AS "Memory used (MB)"
| rex field=ServerName "\s*(?<ServerName>\w+[\d+]).*"
| table ServerName
| sort - ServerName
| stats list(ServerName) as ServerName delim=","
| nomv ServerName
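For what it's worth, here is a more defensive variant we have been considering (a sketch only, not deployed): it drops rows where mem is missing or zero so an odd REST response can't push every server over the threshold, and it makes the hostname extraction explicit. The rex here assumes we only want the short host name before the first dot, which is what the original pattern seemed to be aiming at; the intermediate fields/rename steps are trimmed because only ServerName survives the final table anyway.

| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide
| eval percentage=if(isnum(mem) AND mem>0, round(mem_used/mem*100,1), null())
| where percentage > 90
| rename splunk_server AS ServerName
| rex field=ServerName "^(?<ServerName>[^\.]+)"
| sort - ServerName
| stats list(ServerName) AS ServerName delim=","
| nomv ServerName

One other thing worth keeping in mind: | rest is a point-in-time snapshot, so a momentary spike (or momentarily odd data from a single endpoint) at the instant the alert search runs could explain results that never show up when we look at the historical stats afterward.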