<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: SLA Percentage Dashboard in Dashboards &amp; Visualizations</title>
    <link>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584201#M47884</link>
    <description>Help with calculating the success and failed percentage of all IP's for an interval of time like 1 month?</description>
    <pubDate>Wed, 09 Feb 2022 07:06:51 GMT</pubDate>
    <dc:creator>ITWhisperer</dc:creator>
    <dc:date>2022-02-09T07:06:51Z</dc:date>
    <item>
      <title>Help with calculating the success and failed percentage of all IP's for an interval of time like 1 month?</title>
      <link>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584196#M47882</link>
      <description>&lt;P&gt;We are receiving a log from the host(host=abc) and we have one interesting field named Ip_Address.&lt;BR /&gt;In this field, we have multiple IP's and event is indexing for each 5 min of an interval like(Ping success for Ip_Address=10.10.101.10 OR Ping failed for Ip_Address=10.10.101.10).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;FYI, if I get events like (1:00pm ping failed and 1:05pm ping success), we do not count that towards the failed percentage.&lt;BR /&gt;Basically, only when pings fail more than once continuously (e.g.&amp;nbsp;1:00pm ping failed and 1:05pm ping failed) is it considered a failure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am using the query below to calculate the success and failure percentages of all IPs over an interval of time such as 1 month, but it does not fulfil my requirement, as I want to cover all IPs in a single query. It would be even more useful shown as a dashboard visualization.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;index=unix sourcetype=ping_log "Ping failed for Ip_Address=10.101.101.14"&lt;BR /&gt;(earliest="01/04/2022:07:00:00" latest="1/07/2022:18:00:00") OR (earliest="01/10/2022:07:00:00" latest="1/14/2022:18:00:00")OR (earliest="01/17/2022:07:00:00" latest="1/21/2022:18:00:00")OR (earliest="01/31/2022:07:00:00" latest="1/31/2022:18:00:00") &lt;BR /&gt;| timechart span=600s count &lt;BR /&gt;| where count=2 &lt;BR /&gt;| stats count &lt;BR /&gt;| eval failed_min=count*10 &lt;BR /&gt;| eval total=failed_min/9900*100,SLA=100-total,Ip_Address="10.101.101.14"&lt;BR /&gt;| rename SLA as Success_Percent &lt;BR /&gt;| table Success_Percent Ip_Address&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 10 Feb 2022 00:29:16 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584196#M47882</guid>
      <dc:creator>jackin</dc:creator>
      <dc:date>2022-02-10T00:29:16Z</dc:date>
    </item>
    <item>
      <title>Re: SLA Percentage Dashboard</title>
      <link>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584201#M47884</link>
      <description>&lt;LI-CODE lang="markup"&gt;| gentimes start=-1 increment=10s
| eval Ip_Address="10.10.101.".((random()%20)+1)
| rename starttime as _time 
| fields _time Ip_Address
| bin _time span=5m
| stats values(_time) as time values(Ip_Address) as Ip_Address
| mvexpand Ip_Address
| mvexpand time
| rename time as _time
| eval log="Ping ".mvindex(split("success|failed","|"),floor((random()%6)/5))." for Ip_Address=".Ip_Address
| fields _time log
``` The lines above create some random data ```
``` Extract status and ip address from log entry (if you don't already have these) ```
| rex field=log "Ping (?&amp;lt;status&amp;gt;\w+) for Ip_Address=(?&amp;lt;ip_address&amp;gt;\d+\.\d+\.\d+\.\d+)"
``` Get total events for each ip address (this may already be known if you log for every ip address in every 5 minute slot and you have fixed time ranges) ```
| eventstats count as total by ip_address
``` Sort by ip address and time ```
| sort 0 ip_address _time
``` We are only interested in failures ```
| where status="failed" 
``` Find time difference between successive failures by ip address ```
| streamstats window=2 global=f range(_time) as time_difference by ip_address
``` We are only interested in continuous failures i.e. where time difference is 5 minutes (300 seconds) ```
| where time_difference=300
``` Count failures and keep total by ip address ```
| stats count values(total) as total by ip_address
``` Calculate percentage failures for ip address ```
| eval percentage=100*count/total&lt;/LI-CODE&gt;</description>
      <pubDate>Wed, 09 Feb 2022 07:06:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584201#M47884</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2022-02-09T07:06:51Z</dc:date>
    </item>
    <item>
      <title>Re: SLA Percentage Dashboard</title>
      <link>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584214#M47887</link>
      <description>&lt;P&gt;ThankQ&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/225168"&gt;@ITWhisperer&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;but&amp;nbsp; I do not want all IP address data. Only data need certain IP Addresses are required at the following timings&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;(earliest="01/04/2022:07:00:00" latest="1/07/2022:18:00:00") OR (earliest="01/10/2022:07:00:00" latest="1/14/2022:18:00:00")OR (earliest="01/17/2022:07:00:00" latest="1/21/2022:18:00:00")OR (earliest="01/31/2022:07:00:00" latest="1/31/2022:18:00:00") &lt;/PRE&gt;</description>
      <pubDate>Wed, 09 Feb 2022 08:12:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584214#M47887</guid>
      <dc:creator>jackin</dc:creator>
      <dc:date>2022-02-09T08:12:32Z</dc:date>
    </item>
    <item>
      <title>Re: SLA Percentage Dashboard</title>
      <link>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584216#M47888</link>
      <description>&lt;P&gt;That was a runanywhere example - replace the top part with your search&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;index=unix sourcetype=ping_log " for Ip_Address=10.101.101.14"
(earliest="01/04/2022:07:00:00" latest="1/07/2022:18:00:00") OR (earliest="01/10/2022:07:00:00" latest="1/14/2022:18:00:00")OR (earliest="01/17/2022:07:00:00" latest="1/21/2022:18:00:00")OR (earliest="01/31/2022:07:00:00" latest="1/31/2022:18:00:00") 
``` Extract status and ip address from log entry (if you don't already have these) ```
| rex field=_raw "Ping (?&amp;lt;status&amp;gt;\w+) for Ip_Address=(?&amp;lt;ip_address&amp;gt;\d+\.\d+\.\d+\.\d+)"
``` Get total events for each ip address (this may already be known if you log for every ip address in every 5 minute slot and you have fixed time ranges) ```
| eventstats count as total by ip_address
``` Sort by ip address and time ```
| sort 0 ip_address _time
``` We are only interested in failures ```
| where status="failed" 
``` Find time difference between successive failures by ip address ```
| streamstats window=2 global=f range(_time) as time_difference by ip_address
``` We are only interested in continuous failures i.e. where time difference is 5 minutes (300 seconds) ```
| where time_difference=300
``` Count failures and keep total by ip address ```
| stats count values(total) as total by ip_address
``` Calculate percentage failures for ip address ```
| eval percentage=100*count/total&lt;/LI-CODE&gt;</description>
      <pubDate>Wed, 09 Feb 2022 08:24:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584216#M47888</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2022-02-09T08:24:31Z</dc:date>
    </item>
    <item>
      <title>Re: SLA Percentage Dashboard</title>
      <link>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584226#M47893</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/225168"&gt;@ITWhisperer&lt;/a&gt;&amp;nbsp; ThankQ&lt;BR /&gt;&lt;BR /&gt;Thanks for replying, But this query did not get the output we excepted, we need failed and success percentage within the time to mentioned Ip's in our CSV file. and I hope you understanding the how I considering the failed percentage.&lt;BR /&gt;&lt;BR /&gt;final output like&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;TABLE width="202"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="74"&gt;IP_Address&lt;/TD&gt;&lt;TD width="64"&gt;Failed%&lt;/TD&gt;&lt;TD width="64"&gt;Success%&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;1.1.1.1&lt;/TD&gt;&lt;TD&gt;0.5&lt;/TD&gt;&lt;TD&gt;99.5&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;</description>
      <pubDate>Wed, 09 Feb 2022 09:06:01 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584226#M47893</guid>
      <dc:creator>jackin</dc:creator>
      <dc:date>2022-02-09T09:06:01Z</dc:date>
    </item>
    <item>
      <title>Re: SLA Percentage Dashboard</title>
      <link>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584277#M47899</link>
      <description>&lt;LI-CODE lang="markup"&gt;| eval failed=100*count/total
| eval success=100-failed&lt;/LI-CODE&gt;</description>
      <pubDate>Wed, 09 Feb 2022 13:11:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584277#M47899</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2022-02-09T13:11:32Z</dc:date>
    </item>
    <item>
      <title>Re: SLA Percentage Dashboard</title>
      <link>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584288#M47901</link>
      <description>&lt;P&gt;Hi&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/225168"&gt;@ITWhisperer&lt;/a&gt;&amp;nbsp;&amp;nbsp;&lt;BR /&gt;i am using below query as you suggested it gives the results as expected&lt;BR /&gt;here I forgot to mention 2 logics in above query&lt;BR /&gt;1. output comes only few hosts not for all(means all ip address which i have mentioned CSV file)&lt;/P&gt;&lt;P&gt;2.I need the data in specific times to bring in output like(from January 1st to January 17th and January 31st weekdays from 7am to 6pm)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;&lt;BR /&gt;index=os sourcetype=ping_log &lt;BR /&gt;[ inputlookup Ping.csv]&lt;BR /&gt;(earliest="01/04/2022:07:00:00" latest="1/07/2022:18:00:00") OR (earliest="01/10/2022:07:00:00" latest="1/14/2022:18:00:00")OR (earliest="01/17/2022:07:00:00" latest="1/21/2022:18:00:00")OR (earliest="01/31/2022:07:00:00" latest="1/31/2022:18:00:00") &lt;BR /&gt;| rex field=_raw "Ping (?&amp;lt;status&amp;gt;\w+) for Ip_Address=(?&amp;lt;ip_address&amp;gt;\d+\.\d+\.\d+\.\d+)" &lt;BR /&gt;| eventstats count as total by ip_address &lt;BR /&gt;| sort 0 ip_address _time &lt;BR /&gt;| where status="failed" &lt;BR /&gt;| streamstats window=2 global=f range(_time) as time_difference by ip_address &lt;BR /&gt;| where time_difference=300 &lt;BR /&gt;| stats count values(total) as total by ip_address &lt;BR /&gt;| eval failed=100*count/total&lt;BR /&gt;| eval success=100-failed&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 09 Feb 2022 13:58:03 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Dashboards-Visualizations/Help-with-calculating-the-success-and-failed-percentage-of-all/m-p/584288#M47901</guid>
      <dc:creator>jackin</dc:creator>
      <dc:date>2022-02-09T13:58:03Z</dc:date>
    </item>
  </channel>
</rss>

