Splunk Search

What is a good query to monitor for someone sending too many alerts?


I received an email from the ES techs that someone had sent over 128k alerts to the same address in a 24-hour period.
I tracked it down to two private alerts and disabled them.
Researching further, I found those emailed alerts were only the ones that sent successfully; many people did not receive their alerts or scheduled reports that day.
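
To see which scheduled searches are generating the most email actions in the first place, something like the following may help. This is a sketch, not a tested answer: the `alert_actions`, `user`, and `savedsearch_name` fields come from the `scheduler` sourcetype in `_internal`, and their exact names/values can vary by Splunk version.

```
index=_internal sourcetype=scheduler alert_actions="email" status=success
| stats count by user, savedsearch_name
| sort - count
```

Sorting descending should surface the noisiest alerts at the top, which is how I would expect to spot the two private alerts quickly.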

Here is a sampling of the error message from the _internal index:
04-09-2019 09:15:07.768 -0500 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/search/bin/sendemail.py "results_link=http://TheSearchHeadServer:8000/app/ALL_my_sales/@go?sid=scheduler
karlA28BCL_ZGlnaXRhbF9zYWxlcwRMD559a15d8ba081a9e5_at_1554819300_52888" "ssname=Null Pointer" "graceful=True" "trigger_time=1554819306" results_file="/opt/splunk/var/run/splunk/dispatch/schedulerkarlA28BCL_ZGlnaXRhbF9zYWxlcw_RMDL559a15d8ba081a9e5_at_1554819300_52888/results.csv.gz"': ERROR:root:(452, '4.3.1 Insufficient system storage', u'SplunkSH@gmail.com') while sending mail to: karlA28BCL@gmail.com



Here is the query I have so far:
host=SplunkSH index=_internal "-0500 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/search/bin/sendemail.py" "Insufficient system storage'" "while sending mail to:"
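
Building on that, here is a sketch of a monitoring query that extracts the recipient address from the error text and flags any address that accumulates too many sends in a day. The `rex` pattern is based on the sample log line above, and the 1000 threshold is an arbitrary placeholder to tune:

```
index=_internal sourcetype=splunkd ERROR ScriptRunner sendemail.py "while sending mail to:"
| rex "while sending mail to:\s+(?<recipient>\S+)"
| bin _time span=24h
| stats count AS send_attempts by _time, recipient
| where send_attempts > 1000
```

Note that this only counts attempts that hit the error path shown above; to monitor successful sends as well, the scheduler logs would likely need to be included.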
