Monitoring Splunk

Issue with alert running from DMC for memory overrun.

mookiie2005
Communicator

Hello,

We have an alert in place that uses the REST API to determine when a server is using too much memory, and the server is then restarted.  It had been working great; however, last week an alert came through that listed every box connected to the DMC.  This caused some restarts that created an issue.  When we look at the stats for those machines at the time of the alert, none of the servers appears to have met that condition.  Am I doing something wrong here?

We are using the below search:

| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide
| eval percentage=round(mem_used/mem,3)*100
| where percentage > 90
| fields splunk_server, percentage, mem_used, mem
| rename splunk_server AS ServerName, mem AS "Physical memory installed (MB)", percentage AS "Memory used (%)", mem_used AS "Memory used (MB)"
| rex field=ServerName "\s*(?<ServerName>\w+[\d+]).*"
| table ServerName
| sort - ServerName
| stats list(ServerName) as ServerName delim=","
| nomv ServerName
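
For reference, one way to see why a row matched would be to keep the numeric fields in the alert output instead of reducing the results to just the server name, since the original search drops percentage, mem_used, and mem before the table command. A minimal diagnostic sketch, assuming the same REST endpoint and 90% threshold (the isnum() guard is an addition, not part of the original alert):

| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide
| eval percentage=round((mem_used/mem)*100, 1)
| where isnum(percentage) AND percentage > 90
| sort - percentage
| table splunk_server, mem, mem_used, percentage

With the raw mem and mem_used values visible in the alert results, it is easier to tell whether a spurious match came from the reported data itself or from the comparison.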