Alerting

alert for splunk stop and start

splunkn
Communicator

I have the following requirement. Could anyone help me with possible ideas?

I need to create an alert if Splunk hasn't started within 5 minutes of being stopped.
That is, if someone stops the service and forgets to start it within 5 minutes, I want to trigger an alert.

I can find the stop and start events both in the _audit index and in splunkd.log. Which one is more reliable, and how do I calculate the gap and trigger an alert?
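For the gap calculation itself, matching individual stop/start messages is fragile; a more robust pattern, assuming the instance being watched forwards its _internal logs to a separate indexer/search head that stays up, is a scheduled search that alerts on hosts that have gone quiet. Everything below is a sketch: the 300-second threshold and the use of index=_internal as a heartbeat are assumptions, not your exact requirement.

```
| tstats latest(_time) as last_seen where index=_internal by host
| eval gap_seconds = now() - last_seen
| where gap_seconds > 300
```

Scheduled every few minutes with "trigger when number of results > 0", this fires for any host that has been silent longer than 5 minutes. Note this only works if the search runs on an instance other than the one being stopped.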

Thanks in advance


grijhwani
Motivator

This is a broader sysadmin/service-monitoring question. If Splunk is not running, then by definition it cannot create an alert, can it?

You don't specify which platform you are running on (Linux or Windows). On Linux, I would suggest an external service monitor (such as Nagios or Zabbix), which will continuously monitor according to criteria you set (in this case, a TCP service listening on port 8089) and alert at different levels (e.g. warning when the service goes down, critical after 5 or more minutes of unavailability).
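The warning-then-critical escalation described above can be sketched as a small standalone poller. This is a minimal illustration, not a substitute for Nagios or Zabbix; the function names, the 30-second poll interval, and the 300-second grace period are all assumptions, and only port 8089 (Splunk's default management port) comes from the thread.

```python
import socket
import time


def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def severity(down_seconds, grace=300):
    """Map accumulated downtime to a Nagios-style state:
    0 seconds down          -> "ok"
    less than `grace` down  -> "warning"
    `grace` or more down    -> "critical" (5 minutes by default)
    """
    if down_seconds <= 0:
        return "ok"
    return "critical" if down_seconds >= grace else "warning"


def watch(host="localhost", port=8089, interval=30, grace=300):
    """Poll the Splunk management port and print escalating states."""
    down_since = None
    while True:
        if port_open(host, port):
            down_since = None
            print("ok: splunkd listening on %s:%d" % (host, port))
        else:
            if down_since is None:
                down_since = time.time()
            state = severity(time.time() - down_since, grace)
            print("%s: splunkd down on %s:%d" % (state, host, port))
        time.sleep(interval)
```

In a real deployment the `print` calls would be replaced by the monitoring system's notification mechanism, and the check would run from a host other than the Splunk server itself.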
