This is the expected behaviour. Fired alerts are search artifacts that live in the var folder (temporary storage, under $SPLUNK_HOME/var/run/splunk/dispatch), so when Splunk is restarted those artifacts are deleted.
I would recommend using this app to improve Splunk's default alerting system:
https://splunkbase.splunk.com/app/2665/
It stores the alerts as incidents in the KV store, so they will survive restarts.
Regards
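If you want to verify that the app is really persisting incidents, you can read its KV store collection directly with the rest command. This is just a sketch; the app name (alert_manager) and collection name (incidents) are assumptions based on the app's defaults, so check the app's collections.conf if the search returns nothing:

| rest /servicesNS/nobody/alert_manager/storage/collections/data/incidents
| head 10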
Thanks. I have Alert Manager, though it's in a testing phase right now. We'll look at that for a more permanent solution. Thanks again.
Did you ever figure this one out? We're using REST to populate a dashboard with fired alerts and noticed the same behavior.
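For anyone else hitting this: the fired-alerts REST endpoint only reads the dispatch artifacts, which is why the dashboard goes empty after a restart. A typical panel search looks something like this (the triggered_alert_count field name is what I see on my instance, so double-check it on yours):

| rest /services/alerts/fired_alerts
| table title triggered_alert_count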
Are you restarting the server because it's hanging, or due to heavy load issues? Either way, I'm afraid the answer is yes: after a restart the dispatch/cache/buffer is cleared, just like OS caches.
The Splunk service is being restarted in order to read in new configs and install new apps. It doesn't seem smart that the alerts aren't stored somewhere persistent (KV store, summary index, etc.)
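One restart-proof option that doesn't need an extra app: the audit index records an event each time an alert fires, and since that's indexed data it survives restarts. Something like the following should work; the action=alert_fired event and the ss_name field are what I see in _audit on my version, so verify the field names on yours:

index=_audit action=alert_fired
| table _time ss_name sid severity trigger_time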