How to set up an alert when a server goes down or stops reporting logs to Splunk?

Path Finder

A few days back one of our servers went down, but we didn't know it at the time; we only found out several days later, and by then we had no logs to check what had happened. What query, alert, or other solution would cover this case?


Super Champion

Does the server send a heartbeat message?

If yes, and you are watching specific hosts, something along these lines will work. Save it as a saved search and run it every x minutes, alerting to your mail or alerting system:

index=someindex sourcetype=someheartBeat | stats count | eval AlertFlag=if(count > 0, "No","Yes")

Or you can base the logic on the _internal index, running every 30 minutes or so:

index=_internal sourcetype=splunkd source=*metrics.log host=someimportantHost | stats count | eval AlertFlag=if(count > 0, "No", "Yes")
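If the saved search runs on a schedule, it may also help to scope the search to the same window, so that old events don't mask a host that stopped reporting recently. A sketch, assuming a 30-minute schedule (the host name is a placeholder from the example above):

index=_internal sourcetype=splunkd source=*metrics.log host=someimportantHost earliest=-30m | stats count | eval AlertFlag=if(count > 0, "No", "Yes")

You would then trigger the alert when AlertFlag="Yes" (or, more simply, configure the alert to fire when the result count is zero).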





|metadata type=hosts index=_* index=*
|where now()-lastTime > 10

This finds the difference between now and the last time each host reported an event to any index, and returns the hosts whose gap is greater than 10 seconds. You can adjust the threshold (in seconds) according to your requirement.
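To make the result easier to read in an alert email, you could also report the gap per host and format lastTime as a human-readable timestamp. A sketch, assuming a 10-minute (600-second) threshold instead of the 10 seconds above:

| metadata type=hosts index=_* index=* | eval secondsSinceLastEvent=now()-lastTime | where secondsSinceLastEvent > 600 | eval lastSeen=strftime(lastTime, "%Y-%m-%d %H:%M:%S") | table host lastSeen secondsSinceLastEvent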

What goes around comes around. If it helps, hit it with Karma 🙂