<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Load balance related alert in Alerting</title>
    <link>https://community.splunk.com/t5/Alerting/Load-balance-related-alert/m-p/334056#M5919</link>
    <description>&lt;P&gt;Currently we have 6 hosts, each sharing approx. 16.7-16.9% of the load. An alert needs to be triggered when the load on a particular host drops below 11%, as well as when a host is unavailable or unreachable. &lt;BR /&gt;
When using a top limit and an alert condition of "number of results is &amp;lt; 6", I am receiving wrong alerts.&lt;/P&gt;

&lt;P&gt;Need some guidance.&lt;/P&gt;</description>
    <pubDate>Mon, 11 Dec 2017 20:34:33 GMT</pubDate>
    <dc:creator>skarrupa</dc:creator>
    <dc:date>2017-12-11T20:34:33Z</dc:date>
    <item>
      <title>Load balance related alert</title>
      <link>https://community.splunk.com/t5/Alerting/Load-balance-related-alert/m-p/334056#M5919</link>
      <description>&lt;P&gt;Currently we have 6 hosts, each sharing approx. 16.7-16.9% of the load. An alert needs to be triggered when the load on a particular host drops below 11%, as well as when a host is unavailable or unreachable. &lt;BR /&gt;
When using a top limit and an alert condition of "number of results is &amp;lt; 6", I am receiving wrong alerts.&lt;/P&gt;

&lt;P&gt;Need some guidance.&lt;/P&gt;</description>
      <pubDate>Mon, 11 Dec 2017 20:34:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Load-balance-related-alert/m-p/334056#M5919</guid>
      <dc:creator>skarrupa</dc:creator>
      <dc:date>2017-12-11T20:34:33Z</dc:date>
    </item>
    <item>
      <title>Re: Load balance related alert</title>
      <link>https://community.splunk.com/t5/Alerting/Load-balance-related-alert/m-p/334057#M5920</link>
      <description>&lt;P&gt;hello there,&lt;/P&gt;

&lt;P&gt;assuming event count is the metric you are measuring,&lt;BR /&gt;
below is a search that answers your question;&lt;BR /&gt;
otherwise, you can apply the same idea to whatever metric you are working with (maybe disk growth or another parameter):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;    | tstats count as event_count where index=* by splunk_server 
    | eventstats sum(event_count) as events
    | eval percent = round(event_count/events*100, 2)
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;now you can save this as an alert that triggers when percent is &amp;lt; 11, &lt;BR /&gt;
or add a where clause to the search:&lt;BR /&gt;
    &lt;CODE&gt;| where percent &amp;lt; 11&lt;/CODE&gt;&lt;BR /&gt;
i assume that by "host" you mean a splunk indexer.&lt;BR /&gt;
if that is true, there are plenty of ways to detect that an indexer is down.&lt;BR /&gt;
most likely you will see it in a message, but if you want an alert, you can either capture the events in the _internal index, or do something quick and dirty like:&lt;BR /&gt;
  &lt;CODE&gt;| tstats dc(splunk_server) as indexers_up&lt;/CODE&gt;&lt;BR /&gt;
 or&lt;BR /&gt;
&lt;CODE&gt;| tstats latest(_time) as last_seen by splunk_server&lt;BR /&gt;
| eval last_seen = strftime(last_seen, "%c")&lt;/CODE&gt;&lt;BR /&gt;
if you see fewer than 6 in a given time period, you probably want to check. obviously, you can create a search that tells you which one is "missing", but considering you have only 6 indexers, finding it will be quick and easy&lt;/P&gt;
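
&lt;P&gt;putting it together (just a sketch, assuming event count is your metric and that all 6 indexers normally report events), one search can cover both conditions, the low-share host and the missing indexer; if an indexer is missing entirely, the remaining rows will show indexers_up below 6:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;    | tstats count as event_count where index=* by splunk_server
    | eventstats sum(event_count) as events
    | eventstats dc(splunk_server) as indexers_up
    | eval percent = round(event_count/events*100, 2)
    | where percent &amp;lt; 11 OR indexers_up &amp;lt; 6
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;save this as an alert that triggers when the number of results is greater than 0&lt;/P&gt;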

&lt;P&gt;hope it helps &lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 17:12:42 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Alerting/Load-balance-related-alert/m-p/334057#M5920</guid>
      <dc:creator>adonio</dc:creator>
      <dc:date>2020-09-29T17:12:42Z</dc:date>
    </item>
  </channel>
</rss>