<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Monitor alerts (alarm if alerts do not work) in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Monitor-alerts-alarm-if-alerts-do-not-work/m-p/471892#M81092</link>
    <description>&lt;P&gt;Hello everyone,&lt;/P&gt;

&lt;P&gt;I want to monitor the existing alerts in Splunk. If an alert stops working properly and no longer finds anything, I want to receive a notification or an alarm about it.&lt;/P&gt;

&lt;P&gt;So far I do not know how to do this.&lt;/P&gt;

&lt;P&gt;Is there something in the internal index where Splunk logs its alerts? Any suggestions?&lt;/P&gt;

&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
    <pubDate>Thu, 05 Sep 2019 06:26:20 GMT</pubDate>
    <dc:creator>igschloessl</dc:creator>
    <dc:date>2019-09-05T06:26:20Z</dc:date>
    <item>
      <title>Monitor alerts (alarm if alerts do not work)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Monitor-alerts-alarm-if-alerts-do-not-work/m-p/471892#M81092</link>
      <description>&lt;P&gt;Hello everyone,&lt;/P&gt;

&lt;P&gt;I want to monitor the existing alerts in Splunk. If an alert stops working properly and no longer finds anything, I want to receive a notification or an alarm about it.&lt;/P&gt;

&lt;P&gt;So far I do not know how to do this.&lt;/P&gt;

&lt;P&gt;Is there something in the internal index where Splunk logs its alerts? Any suggestions?&lt;/P&gt;

&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Thu, 05 Sep 2019 06:26:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Monitor-alerts-alarm-if-alerts-do-not-work/m-p/471892#M81092</guid>
      <dc:creator>igschloessl</dc:creator>
      <dc:date>2019-09-05T06:26:20Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor alerts (alarm if alerts do not work)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Monitor-alerts-alarm-if-alerts-do-not-work/m-p/471893#M81093</link>
      <description>&lt;P&gt;As a starting point, have a look at &lt;CODE&gt;index=_internal sourcetype=scheduler&lt;/CODE&gt;. This gives you the logs for all scheduled searches, including when each one started, when it completed, its event count, result count, time taken, and so on.&lt;/P&gt;</description>
      <pubDate>Thu, 05 Sep 2019 08:24:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Monitor-alerts-alarm-if-alerts-do-not-work/m-p/471893#M81093</guid>
      <dc:creator>harsmarvania57</dc:creator>
      <dc:date>2019-09-05T08:24:59Z</dc:date>
    </item>
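    <!-- Editor's note: a minimal sketch (untested) of summarising the scheduler logs
         per saved search, following the reply above. The field names (savedsearch_name,
         app, status, result_count, run_time) are standard scheduler.log fields; status
         values other than "success" and "skipped" also exist.

         index=_internal sourcetype=scheduler
         | stats count
                 count(eval(status="success")) AS runs_succeeded
                 count(eval(status="skipped")) AS runs_skipped
                 sum(result_count) AS total_results
                 avg(run_time) AS avg_run_time
           by savedsearch_name app

         A saved search whose runs_succeeded is high but whose total_results is 0 is a
         candidate for the "alert that no longer finds anything" the question asks about.
    -->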
    <item>
      <title>Re: Monitor alerts (alarm if alerts do not work)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Monitor-alerts-alarm-if-alerts-do-not-work/m-p/471894#M81094</link>
      <description>&lt;P&gt;Thank you. I tried to build a summary with &lt;CODE&gt;appendcols&lt;/CODE&gt;:&lt;/P&gt;</description>

&lt;P&gt;index=_internal sourcetype=scheduler earliest=-7d@d latest=-0d@d&lt;BR /&gt;
| eval period="-7d"&lt;BR /&gt;
| stats count min(result_count) as min_result_count max(result_count) as max_result_count avg(result_count) as avg_result_count by savedsearch_name app period&lt;BR /&gt;
| eval avg_result_count=round(avg_result_count,2)&lt;BR /&gt;
| table savedsearch_name app period min_result_count max_result_count avg_result_count count&lt;/P&gt;

&lt;P&gt;| appendcols [search index=_internal sourcetype=scheduler earliest=-14d@d latest=-7d@d&lt;BR /&gt;
| eval period="-14d"&lt;BR /&gt;
| stats count min(result_count) as min_result_count max(result_count) as max_result_count avg(result_count) as avg_result_count by savedsearch_name app period&lt;BR /&gt;
| eval avg_result_count=round(avg_result_count,2)&lt;BR /&gt;
| table savedsearch_name app period min_result_count max_result_count avg_result_count count&lt;BR /&gt;
]&lt;BR /&gt;
| appendcols [search index=_internal sourcetype=scheduler earliest=-21d@d latest=-14d@d&lt;BR /&gt;
| eval period="-21d"&lt;BR /&gt;
| stats count min(result_count) as min_result_count max(result_count) as max_result_count avg(result_count) as avg_result_count by savedsearch_name app period&lt;BR /&gt;
| eval avg_result_count=round(avg_result_count,2)&lt;BR /&gt;
| table savedsearch_name app period min_result_count max_result_count avg_result_count count&lt;BR /&gt;
]&lt;BR /&gt;
But this does not work. And I don't know how to raise an alarm when the counts differ significantly from the previous weeks.&lt;/P&gt;</description>
      <pubDate>Wed, 30 Sep 2020 02:04:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Monitor-alerts-alarm-if-alerts-do-not-work/m-p/471894#M81094</guid>
      <dc:creator>igschloessl</dc:creator>
      <dc:date>2020-09-30T02:04:39Z</dc:date>
    </item>
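    <!-- Editor's note: appendcols aligns subsearch rows by position, not by
         savedsearch_name, so the three result sets drift apart whenever their row
         counts differ, which is the likely reason the query above "does not work".
         A sketch (untested) that computes all three weekly periods in one pass and
         flags searches whose latest week deviates strongly from their own baseline;
         the 50% threshold is an arbitrary placeholder.

         index=_internal sourcetype=scheduler earliest=-21d@d latest=@d
         | eval period=case(_time >= relative_time(now(), "-7d@d"), "-7d",
                            _time >= relative_time(now(), "-14d@d"), "-14d",
                            true(), "-21d")
         | stats count avg(result_count) AS avg_result_count by savedsearch_name app period
         | eval avg_result_count=round(avg_result_count,2)
         | eventstats avg(avg_result_count) AS baseline by savedsearch_name app
         | where period="-7d" AND abs(avg_result_count - baseline) > 0.5 * baseline
    -->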
  </channel>
</rss>

