Monitoring Splunk

Issue with an alert running from the DMC for memory overrun.



We have an alert in place that uses the REST API to determine when a server is using too much memory, and the server is then restarted. It had been working great; however, last week an alert came through that listed every box connected to the DMC, which triggered restarts and caused an issue. When we looked at the stats for those machines at the time of the alert, none of the servers actually met the condition. Am I doing something wrong here?

We are using the search below:

| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide
| eval percentage=round(mem_used/mem,3)*100
| where percentage > 90
| fields splunk_server, percentage, mem_used, mem
| rename splunk_server AS ServerName, mem AS "Physical memory installed (MB)", percentage AS "Memory used (%)", mem_used AS "Memory used (MB)"
| rex field=ServerName "\s*(?<ServerName>\w+[\d+]).*"
| table ServerName
| sort - ServerName
| stats list(ServerName) as ServerName delim=","
| nomv ServerName
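One thing worth checking (a sketch, not a confirmed fix): if the REST call returns rows where mem or mem_used is missing, zero, or non-numeric, the division can produce null or misleading values before the threshold test. A hypothetical variant that guards the calculation and keeps the raw numbers in the output for auditing might look like this (field names assumed to match the hostwide resource-usage endpoint above):

```
| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide
| where isnum(mem) AND mem > 0 AND isnum(mem_used)
| eval percentage=round(mem_used/mem*100,1)
| where percentage > 90
| table splunk_server, percentage, mem_used, mem
```

Leaving percentage, mem_used, and mem in the final table (instead of reducing to ServerName only) would let you confirm from the alert itself whether the 90% condition was genuinely met when it fired.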