<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Alert when a Splunk service is down in Alerting</title>
    <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368413#M11459</link>
    <description>&lt;P&gt;Hi, &lt;/P&gt;

&lt;P&gt;Suppose we have 10 heavy forwarders and want to be alerted if any one of them goes down. &lt;BR /&gt;
How do we form an alert query? &lt;/P&gt;

&lt;P&gt;&lt;CODE&gt;index=_internal source=*splunkd.log*&lt;/CODE&gt; may work for a single server, but how do we extend the query to work for multiple servers?&lt;/P&gt;

&lt;P&gt;If we use &lt;BR /&gt;
 &lt;CODE&gt;index=_internal source=*splunkd.log* | stats count by host&lt;/CODE&gt;, it may not work, as a host that is down won't be included in the result set. &lt;/P&gt;</description>
    <pubDate>Fri, 27 Apr 2018 00:13:23 GMT</pubDate>
    <dc:creator>nawazns5038</dc:creator>
    <dc:date>2018-04-27T00:13:23Z</dc:date>
    <item>
      <title>Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368413#M11459</link>
      <description>&lt;P&gt;Hi, &lt;/P&gt;

&lt;P&gt;Suppose we have 10 heavy forwarders and want to be alerted if any one of them goes down. &lt;BR /&gt;
How do we form an alert query? &lt;/P&gt;

&lt;P&gt;&lt;CODE&gt;index=_internal source=*splunkd.log*&lt;/CODE&gt; may work for a single server, but how do we extend the query to work for multiple servers?&lt;/P&gt;

&lt;P&gt;If we use &lt;BR /&gt;
 &lt;CODE&gt;index=_internal source=*splunkd.log* | stats count by host&lt;/CODE&gt;, it may not work, as a host that is down won't be included in the result set. &lt;/P&gt;</description>
      <pubDate>Fri, 27 Apr 2018 00:13:23 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368413#M11459</guid>
      <dc:creator>nawazns5038</dc:creator>
      <dc:date>2018-04-27T00:13:23Z</dc:date>
    </item>
    <item>
      <title>Re: Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368414#M11460</link>
      <description>&lt;P&gt;You can search the metadata and alert if a forwarder has not reported for more than a certain threshold (here, 300 seconds):&lt;BR /&gt;
&lt;PRE&gt;| metadata type=hosts | eval age = now() - lastTime | search age &amp;gt; 300 &lt;/PRE&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 27 Apr 2018 02:18:00 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368414#M11460</guid>
      <dc:creator>pradeepkumarg</dc:creator>
      <dc:date>2018-04-27T02:18:00Z</dc:date>
    </item>
    <item>
      <title>Re: Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368415#M11461</link>
      <description>&lt;P&gt;Add your heavy forwarders as search peers to your &lt;A href="http://docs.splunk.com/Documentation/Splunk/7.0.3/DMC/DMCoverview"&gt;Monitoring Console&lt;/A&gt; and enable the "DMC Alert - Search Peer Not Responding" alert.&lt;/P&gt;</description>
      <pubDate>Fri, 27 Apr 2018 03:22:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368415#M11461</guid>
      <dc:creator>Yorokobi</dc:creator>
      <dc:date>2018-04-27T03:22:47Z</dc:date>
    </item>
    <item>
      <title>Re: Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368416#M11462</link>
      <description>&lt;P&gt;&lt;STRONG&gt;However, in environments with large numbers of values for each category, the data might not be complete. This is intentional and allows the metadata command to operate within reasonable time and memory usage.&lt;/STRONG&gt; ... from docs.&lt;/P&gt;

&lt;P&gt;Given that, I don't think metadata can produce accurate results, so I don't see it working. &lt;/P&gt;</description>
      <pubDate>Fri, 27 Apr 2018 23:09:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368416#M11462</guid>
      <dc:creator>nawazns5038</dc:creator>
      <dc:date>2018-04-27T23:09:18Z</dc:date>
    </item>
    <item>
      <title>Re: Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368417#M11463</link>
      <description>&lt;P&gt;It checks only the indexers, and even then only the management port (8089).&lt;/P&gt;</description>
      <pubDate>Fri, 27 Apr 2018 23:12:00 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368417#M11463</guid>
      <dc:creator>nawazns5038</dc:creator>
      <dc:date>2018-04-27T23:12:00Z</dc:date>
    </item>
    <item>
      <title>Re: Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368418#M11464</link>
      <description>&lt;P&gt;@Yorokobi is right - if you add the HFs as search peers on your Monitoring Console, the MC will contact them via port 8089 and you can use its built-in alert to get a notification when one of them goes down. This actually works for all Splunk instances, be they indexers, search heads, HFs...&lt;/P&gt;</description>
      <pubDate>Mon, 30 Apr 2018 21:15:26 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368418#M11464</guid>
      <dc:creator>xpac</dc:creator>
      <dc:date>2018-04-30T21:15:26Z</dc:date>
    </item>
    <item>
      <title>Re: Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368419#M11465</link>
      <description>&lt;P&gt;If you take what you have and add two more lines, you have an instant alert.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal source=*splunkd.log*
| stats dc(host) AS count values(host)
| where count &amp;lt; &amp;lt;known_number_of_hosts&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Thu, 10 May 2018 00:03:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368419#M11465</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2018-05-10T00:03:20Z</dc:date>
    </item>
    <item>
      <title>Re: Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368420#M11466</link>
      <description>&lt;P&gt;This needs one more stats, or just dc(host) on the existing one. Right now the count gives the count of events per host.&lt;/P&gt;</description>
      <pubDate>Thu, 10 May 2018 00:35:11 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368420#M11466</guid>
      <dc:creator>somesoni2</dc:creator>
      <dc:date>2018-05-10T00:35:11Z</dc:date>
    </item>
    <item>
      <title>Re: Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368421#M11467</link>
      <description>&lt;P&gt;Correct, I really messed that up the first time.  Corrected now.&lt;/P&gt;</description>
      <pubDate>Thu, 10 May 2018 14:38:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368421#M11467</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2018-05-10T14:38:53Z</dc:date>
    </item>
    <item>
      <title>Re: Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368422#M11468</link>
      <description>&lt;P&gt;Create a lookup with all the required hostnames and use it in the query below. &lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal host=*hfwd* | stats count by host
| append [ | inputlookup hfwd_hosts | table host ] | stats sum(count) as count by host | fillnull value=0 | where count=0 &lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Tue, 29 Sep 2020 21:04:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368422#M11468</guid>
      <dc:creator>nawazns5038</dc:creator>
      <dc:date>2020-09-29T21:04:50Z</dc:date>
    </item>
    <item>
      <title>Re: Alert when a Splunk service is down</title>
      <link>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368423#M11469</link>
      <description>&lt;P&gt;You can also narrow down the search: &lt;/P&gt;

&lt;PRE&gt;| metadata type=hosts | search host= | eval age = now() - lastTime | search age &amp;gt; 300 &lt;/PRE&gt;

&lt;P&gt;OR &lt;/P&gt;

&lt;PRE&gt;| metadata type=hosts | search host=testweb* | eval age = now() - lastTime | search age &amp;gt; 300 &lt;/PRE&gt;</description>
      <pubDate>Thu, 21 Feb 2019 16:30:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Alert-when-a-Splunk-service-is-down/m-p/368423#M11469</guid>
      <dc:creator>Krishnagrandhi</dc:creator>
      <dc:date>2019-02-21T16:30:20Z</dc:date>
    </item>
  </channel>
</rss>

