On a Splunk Cloud instance, I created an Alert that would send out an email whenever a "real-time" search found an error. I created it to test the functionality of Splunk's Alerts system. After successfully receiving the emails, I deleted the alert from Search & Reporting > Alerts.
After the alert was deleted, I have continued to receive the alert email for the last 72 hours. I tried changing the SMTP server settings, disabling the send-email alerting functionality, and restarting the Splunk Cloud instance. I even removed the data source that was providing input data to Splunk, but I am still getting spammed. I called customer service, but they are not willing to help because I am a trial customer. Any ideas what the issue could be?
I had the same problem and figured out that even though my alert was no longer shown in the GUI, it was still stored in a file.
Here is what I did.
Step 1. Look for a file called savedsearches.conf
On my Mac, I ran the following:
find . -name "savedsearches.conf"
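If you know your install root, you can narrow the search to Splunk's configuration tree instead of scanning the whole disk; alert stanzas usually end up under etc/apps or etc/users. A small sketch (the /tmp demo tree and the SPLUNK_HOME path are stand-ins — on a real host point SPLUNK_HOME at your actual install, e.g. /opt/splunk):

```shell
# Simulate a Splunk etc/ tree so the find pattern is demonstrable;
# on a real host, set SPLUNK_HOME to your actual install directory.
SPLUNK_HOME=/tmp/splunk_demo
mkdir -p "$SPLUNK_HOME/etc/users/admin/search/local"
touch "$SPLUNK_HOME/etc/users/admin/search/local/savedsearches.conf"

# Search only under the config tree, not the whole filesystem:
find "$SPLUNK_HOME/etc" -name savedsearches.conf
```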
Step 2. Look for the stanza that defines your alert.
It looks like the following.
[alert test]
action.email.useNSSubject = 1
action.script = 1
action.script.filename = sendsms.sh
...
Step 3. Remove the stanza.
Step 4. Restart Splunk
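The stanza removal in Step 3 can be done with any editor, but a scripted sketch may help if you have many alerts to clean up. This uses awk to drop one stanza from a copy of the file; the /tmp paths and the "[alert test]" / "[keep me]" stanza names are examples only — always back up the real savedsearches.conf first:

```shell
# Create a sample savedsearches.conf with two stanzas:
cat > /tmp/savedsearches.conf <<'EOF'
[alert test]
action.email.useNSSubject = 1
action.script = 1

[keep me]
search = index=main
EOF

# Drop everything from the target stanza header up to the next header:
awk -v stanza='[alert test]' '
  $0 == stanza { skip = 1; next }   # start skipping at the target stanza
  /^\[/        { skip = 0 }         # any new stanza header ends the skip
  !skip        { print }
' /tmp/savedsearches.conf > /tmp/savedsearches.cleaned.conf

cat /tmp/savedsearches.cleaned.conf
```

After replacing the real file with the cleaned copy, restart Splunk as in Step 4 so the scheduler drops the saved search.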
I had the same issue. Mine turned out to be Postfix storing a huge number of emails in the mail queue due to a real-time alert that didn't have a throttle set. The server was sending out three emails at a time every 70 minutes, and had nearly 600 in the queue. The alert may be gone, but it has already triggered all of the emails.
Check the Postfix mail queue (e.g. with mailq or postqueue -p):
You will likely see a number of undeliverable emails sitting in the queue:
(delivery temporarily suspended: connect to (mailserver)[:4d0:a302:1100::151]:25: Network is unreachable)
Clear the deferred messages from the queue:
postsuper -d ALL deferred
(To wipe the entire queue, not just the deferred messages, use postsuper -d ALL.)
And that should fix it.
Same thing here. It's happening to me on a Splunk Cloud free-trial instance where I can't even delete the instance! How do I delete the instance and start over? I am getting the same email every 4 minutes, and I'd rather not have that for the length of my trial!
You can search the "_audit" index, where all searches are logged.
Search for "rt_scheduler_ohaquesearch_RMD5cxxxxxxxxxx_at_14546xxxxxx_80.9" and see when it was fired.
In addition, you can go to $SPLUNK_HOME/var/run/splunk/dispatch and look for a directory with this name... every search creates a directory there containing the search bundle and the results of that search.
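A quick sketch of that dispatch-directory check — real-time scheduled searches show up with an rt_scheduler prefix. The /tmp/dispatch tree and the directory names below are illustrative; on a real instance, list $SPLUNK_HOME/var/run/splunk/dispatch instead:

```shell
# Simulated dispatch directory so the pattern is demonstrable;
# on a real host use $SPLUNK_HOME/var/run/splunk/dispatch.
mkdir -p /tmp/dispatch/rt_scheduler_admin__search_RMD5abc_at_1454600000_1
mkdir -p /tmp/dispatch/scheduler_admin__other_RMD5def_at_1454600001_2

# Any directories still matching the real-time scheduler prefix?
ls /tmp/dispatch | grep '^rt_scheduler'
```

If no matching directory exists, the real-time search itself is no longer running.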
Have you already restarted your Splunk instance?
Under Activity > Jobs and Activity > Triggered Alerts I see nothing.
There are two links in the emails that I receive; both link to error messages in Splunk.
In handler 'savedsearch': Could not find object id=email errors xxxxxxx
Error in 'SearchOperator:loadjob': Cannot find job_id 'rt_scheduler_ohaquesearch_RMD5cxxxxxxxxxx_at_14546xxxxxx_80.9'.
First, make sure there's no job still running for that alert. Second, check if the emails you're still getting might just be old emails from some queue that's slowly being churned through.
Under Activity > Jobs I see no jobs running. Nothing under Activity > Triggered Alerts either.
The alert condition for 'email errors from rbi15' was triggered.
Alert: email errors from rbi15
Clicking on emails errors from rbi15 takes me to splunk with this error "In handler 'savedsearch': Could not find object id=email errors from rbi15"
Clicking on View results takes me to Splunk with this error: "Error in 'SearchOperator:loadjob': Cannot find job_id 'rt_scheduler_ohaquesearch_RMD5c9fd4a4935b38451_at_1454692709_80.9'."
Run a search against the _audit index for that search id, over a time range long enough to cover the start of the alert search:
You should see a bunch of events around the start of the search, up until you deleted the alert. After that there shouldn't be any events with that search id in them, apart from possibly failed attempts to locate the id after you clicked the email link.
If your results are as I expect, the search itself stopped long ago, but a mail queue somewhere is still working through its backlog, probably throttled heavily to avoid being spam-flagged.