Alerting

Generic alert that I can apply to all hosts

snowmizer
Communicator

I would like to be able to create an alert that will notify us if Splunk either 1) stops getting log data from a host or 2) gets more than x errors in a specified period.

I know that I can write a search/alert for each host; however, I would like to have one search/alert that monitors all hosts and identifies the specific host that is having issues.

Is this possible?

Thanks.


woodcock
Esteemed Legend

Yes, you can do something like this:

err* OR warn* OR fatal | stats count by host | where count > YourErrorThreshold

Then schedule the alert to run periodically over your evaluation timespan (e.g., every 5 minutes over the last 5 minutes) and set the alert to trigger on "number of events > 0".
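
As a concrete illustration of that scheduling, here is a rough sketch of what the saved search could look like in savedsearches.conf; the stanza name, email address, threshold, and 5-minute cron/time range are placeholders to adjust for your environment:

[Generic Host Error Alert]
search = err* OR warn* OR fatal | stats count by host | where count > YourErrorThreshold
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
action.email = 1
action.email.to = ops@example.com

For the first part of the question (a host that stops sending data entirely), a minimal sketch is to compare each host's latest event time against the current time with tstats; this assumes all relevant hosts write to searchable indexes and uses a placeholder 10-minute silence threshold:

| tstats latest(_time) AS lastSeen WHERE index=* BY host
| eval minutesSinceLastEvent = round((now() - lastSeen) / 60, 0)
| where minutesSinceLastEvent > 10

Scheduled the same way, this returns one row per silent host, so the same "number of events > 0" trigger condition works for both alerts.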
