<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Splunk Alerts &amp; Dashboard Panels in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317937#M95129</link>
    <description>&lt;P&gt;If you're truly worried about running over the raw data twice, you can define an accelerated data model on your data and power your dashboards and alerts off that.&lt;/P&gt;

&lt;P&gt;In reality, though, you usually won't see big benefits with just two consuming searches... especially considering your metrics will have under 3k events per host per day, and you're only looking at ten minutes, i.e. around twenty events per host.&lt;/P&gt;</description>
    <pubDate>Wed, 12 Apr 2017 05:42:39 GMT</pubDate>
    <dc:creator>martin_mueller</dc:creator>
    <dc:date>2017-04-12T05:42:39Z</dc:date>
    <item>
      <title>Splunk Alerts &amp; Dashboard Panels</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317933#M95125</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;

&lt;P&gt;I have configured an alert to trigger when the tcpout queue size breaches 80% of capacity, as per the SPL below:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;earliest=-10m  index=_internal host=*hfw* source=*metrics.log group=queue name=tcpout* 
| eval queuecapacity_percent=round((current_size/max_size)*100,2), current_size_mb=round((current_size/1024/1024),2), max_size_mb=round((max_size/1024/1024),2) 
| where queuecapacity_percent &amp;gt;= 80 
| fields host, index, name, group, current_size, largest_size, max_size, date_year, date_month, date_mday, date_hour, date_minute, date_second, queuecapacity_percent, current_size_mb, max_size_mb
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;In addition to this, I'd like to have a Single Value panel on a dashboard which displays the current TCP output queue size, which I have written so far as:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;earliest=-10m index=_internal host=*hfw* source=*metrics.log group=queue name=tcpout* 
| eval queuecapacity_percent=round((current_size/max_size)*100,2) 
| eval current_size_mb=round((current_size/1024/1024),2) 
| eval max_size_mb=round((max_size/1024/1024),2) 
| timechart span=1m max(queuecapacity_percent)
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;My question is: what are the best practices when creating alerts and dashboard panels?&lt;BR /&gt;
These two objects are looking at the same data; however, I want an alert &amp;amp; email to trigger when the value breaches the 80% threshold.&lt;/P&gt;

&lt;P&gt;Is it common to have a search dedicated to the alert, and one dedicated to the dashboard panel?&lt;BR /&gt;
How can I combine these into a single Splunk object to save on performance?&lt;BR /&gt;
Is there a better way to approach this?&lt;/P&gt;

&lt;P&gt;Any help is greatly appreciated!&lt;BR /&gt;
Apologies if this question has been asked before.&lt;/P&gt;

&lt;P&gt;Thanks,&lt;/P&gt;

&lt;P&gt;Craig&lt;/P&gt;</description>
      <pubDate>Tue, 11 Apr 2017 05:50:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317933#M95125</guid>
      <dc:creator>craigwilkinson</dc:creator>
      <dc:date>2017-04-11T05:50:07Z</dc:date>
    </item>
    <item>
      <title>Re: Splunk Alerts &amp; Dashboard Panels</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317934#M95126</link>
      <description>&lt;P&gt;@craigwilkinson... Ideally, dashboards are created so that users can get insight into what is happening in their system at any given point in time... historical, real-time, or predictive. If they are set up on a monitoring screen, they might either run a real-time search (depending on your Splunk infrastructure) or refresh periodically.&lt;/P&gt;

&lt;P&gt;Once a missed SLA or a KPI breach triggers an alert (like queue capacity above 80%), users can turn to such dashboards to correlate and investigate further (even without knowledge of Splunk or the underlying data). This covers situations like how much data is getting indexed, what the CPU/memory usage on the Splunk server is, how much license volume is being utilized, or a spike in a specific sourcetype ingesting more data. Even though a dashboard might show a KPI above 80%, unless it is alerted on with a proper alert action, the breach might get missed or go unnoticed.&lt;/P&gt;
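
&lt;P&gt;As a concrete sketch of the &lt;STRONG&gt;sendemail&lt;/STRONG&gt; approach mentioned below (the recipient address is illustrative, and this assumes mail server settings are already configured in Splunk), the breach condition can gate the email:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;earliest=-10m index=_internal host=*hfw* source=*metrics.log group=queue name=tcpout*
| eval queuecapacity_percent=round((current_size/max_size)*100,2)
| where queuecapacity_percent &amp;gt;= 80
| sendemail to="ops@example.com" subject="tcpout queue above 80% capacity" sendresults=true
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Note this runs as part of the search itself, unlike the alert action of a saved search.&lt;/P&gt;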

&lt;P&gt;Having said that: &lt;BR /&gt;
1) You can schedule a dashboard for periodic PDF delivery by email (provided you have an email server set up and your dashboard does not have interactive form elements). &lt;BR /&gt;
2) You can also check out the &lt;STRONG&gt;sendemail&lt;/STRONG&gt; Splunk command to send out an email if a specific condition is met.&lt;/P&gt;</description>
      <pubDate>Tue, 11 Apr 2017 07:26:26 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317934#M95126</guid>
      <dc:creator>niketn</dc:creator>
      <dc:date>2017-04-11T07:26:26Z</dc:date>
    </item>
    <item>
      <title>Re: Splunk Alerts &amp; Dashboard Panels</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317935#M95127</link>
      <description>&lt;P&gt;For the search itself&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal host=*hfw* source=*metrics.log group=queue name=tcpout*
&lt;/CODE&gt;&lt;/PRE&gt;
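
&lt;P&gt;As a sketch of how that knowledge-object setup could look in conf files (the stanza and tag names are illustrative, and this assumes the usual &lt;CODE&gt;splunkd&lt;/CODE&gt; sourcetype for metrics.log events in &lt;CODE&gt;_internal&lt;/CODE&gt;):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# eventtypes.conf -- the reusable base search
[tcpout_queue_metrics]
search = index=_internal host=*hfw* source=*metrics.log group=queue name=tcpout*

# tags.conf -- tag the eventtype for easier discovery
[eventtype=tcpout_queue_metrics]
queue = enabled

# props.conf -- calculated fields on the sourcetype, applied at search time
[splunkd]
EVAL-queuecapacity_percent = round((current_size/max_size)*100,2)
EVAL-current_size_mb = round((current_size/1024/1024),2)
EVAL-max_size_mb = round((max_size/1024/1024),2)
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Searches can then start with &lt;CODE&gt;eventtype=tcpout_queue_metrics&lt;/CODE&gt; and use the calculated fields directly.&lt;/P&gt;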

&lt;P&gt;you'd wrap that in an eventtype with tags for re-use across multiple searches. Your &lt;CODE&gt;eval&lt;/CODE&gt; calls would be best stored as calculated fields for that sourcetype, so you won't have to add them to every search. All that remains (the &lt;CODE&gt;where&lt;/CODE&gt; or &lt;CODE&gt;timechart&lt;/CODE&gt;) is specific to the alert or dashboard, so there's no need to change that.&lt;/P&gt;</description>
      <pubDate>Tue, 11 Apr 2017 08:02:10 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317935#M95127</guid>
      <dc:creator>martin_mueller</dc:creator>
      <dc:date>2017-04-11T08:02:10Z</dc:date>
    </item>
    <item>
      <title>Re: Splunk Alerts &amp; Dashboard Panels</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317936#M95128</link>
      <description>&lt;P&gt;Ahh ok. So just to clarify, it &lt;STRONG&gt;is&lt;/STRONG&gt; common to the have 2 separate scheduled searches running:&lt;BR /&gt;
1st - to display the current queue size metric on a dashboard panel (and update as specified - using timechart)&lt;BR /&gt;
2nd - to send an alert when the capacity exceeds 80%. (using "where")&lt;/P&gt;

&lt;P&gt;It kind of seems inefficient to me - having two searches for essentially the same output.&lt;/P&gt;

&lt;P&gt;The alerts are to be used for OOB hours and when staff are away from their desks, whereas the dashboard is to display pretty metrics during office hours for management etc.&lt;/P&gt;</description>
      <pubDate>Tue, 11 Apr 2017 23:45:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317936#M95128</guid>
      <dc:creator>craigwilkinson</dc:creator>
      <dc:date>2017-04-11T23:45:02Z</dc:date>
    </item>
    <item>
      <title>Re: Splunk Alerts &amp; Dashboard Panels</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317937#M95129</link>
      <description>&lt;P&gt;If you're truly worried about running over the raw data twice, you can define an accelerated data model on your data and power your dashboards and alerts off that.&lt;/P&gt;
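
&lt;P&gt;For instance (a sketch only; the &lt;CODE&gt;Forwarder_Metrics&lt;/CODE&gt; model and its object/field names are assumptions, not something that ships with Splunk), both consumers could query the accelerated summaries via &lt;CODE&gt;tstats&lt;/CODE&gt;:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| tstats summariesonly=true max(Tcpout.queuecapacity_percent) AS peak_capacity
    from datamodel=Forwarder_Metrics where Tcpout.host=*hfw* earliest=-10m
    by Tcpout.host _time span=1m
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;The dashboard would chart &lt;CODE&gt;peak_capacity&lt;/CODE&gt; directly, while the alert would append &lt;CODE&gt;| where peak_capacity &amp;gt;= 80&lt;/CODE&gt;.&lt;/P&gt;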

&lt;P&gt;In reality, though, you usually won't see big benefits with just two consuming searches... especially considering your metrics will have under 3k events per host per day, and you're only looking at ten minutes, i.e. around twenty events per host.&lt;/P&gt;</description>
      <pubDate>Wed, 12 Apr 2017 05:42:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Alerts-Dashboard-Panels/m-p/317937#M95129</guid>
      <dc:creator>martin_mueller</dc:creator>
      <dc:date>2017-04-12T05:42:39Z</dc:date>
    </item>
  </channel>
</rss>

