<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Optimizing Tweaks For Slow Queries in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392060#M114110</link>
    <description>&lt;P&gt;Tried this  &lt;CODE&gt;| inputcsv TEMP [search  index="AAA" earliest=last_executed_time  latest=now  | stats count(abc) as xyz1 | eval total = xyz+xyz1]&lt;/CODE&gt;&lt;/P&gt;</description>
    <pubDate>Thu, 18 Jul 2019 16:42:38 GMT</pubDate>
    <dc:creator>reverse</dc:creator>
    <dc:date>2019-07-18T16:42:38Z</dc:date>
    <item>
      <title>Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392058#M114108</link>
      <description>&lt;P&gt;I have a simple query &lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| stats count(abc) as xyz
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Since it is taking too much time, I decided to tweak it a bit.&lt;BR /&gt;
For the time range it has already run over, I am saving the results to a CSV with 2 values:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;last_executed_time and xyz 
&lt;/CODE&gt;&lt;/PRE&gt;
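
&lt;P&gt;A sketch of the saved search that writes the CSV (index name taken from later in this thread; the file name TEMP and the timestamp format are assumptions):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index="AAA" | stats count(abc) as xyz | eval last_executed_time=strftime(now(), "%m/%d/%Y %H:%M:%S") | outputcsv TEMP
&lt;/CODE&gt;&lt;/PRE&gt;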

&lt;P&gt;Let's say outputcsv ran at 10am, and it is now 10:30am. I want to take the data from the CSV (covering up to 10am), search only from 10am until now, and add the two xyz values together.&lt;BR /&gt;
Please help.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 16:35:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392058#M114108</guid>
      <dc:creator>reverse</dc:creator>
      <dc:date>2019-07-18T16:35:44Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392059#M114109</link>
      <description>&lt;P&gt;The idea is to get last_executed_time from the CSV and assign it as earliest in the joined query, so we get the correct value of xyz for the interval since the last run.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 16:37:26 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392059#M114109</guid>
      <dc:creator>reverse</dc:creator>
      <dc:date>2019-07-18T16:37:26Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392060#M114110</link>
      <description>&lt;P&gt;Tried this  &lt;CODE&gt;| inputcsv TEMP [search  index="AAA" earliest=last_executed_time  latest=now  | stats count(abc) as xyz1 | eval total = xyz+xyz1]&lt;/CODE&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 16:42:38 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392060#M114110</guid>
      <dc:creator>reverse</dc:creator>
      <dc:date>2019-07-18T16:42:38Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392061#M114111</link>
      <description>&lt;P&gt;Didn't work.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 16:45:30 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392061#M114111</guid>
      <dc:creator>reverse</dc:creator>
      <dc:date>2019-07-18T16:45:30Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392062#M114112</link>
      <description>&lt;P&gt;What are you trying to figure out? &lt;CODE&gt;| stats count&lt;/CODE&gt; is a transforming command, so it runs on the search head and all the matching data has to come back to that single search head, which is relatively inefficient (especially if you have a ton of data and a lot of indexers). You might be better off looking into the tstats command, which can be orders of magnitude faster than stats because it looks at the index metadata (the tsidx files), not the actual event data.&lt;/P&gt;
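
&lt;P&gt;For example, a plain event count with tstats over the index mentioned in this thread (a sketch; the time range is assumed) could look like:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| tstats count where index="AAA" earliest=-30m latest=now
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Keep in mind tstats works against indexed fields and metadata only, so a count of a specific search-time field like abc would still need stats or an accelerated data model.&lt;/P&gt;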

&lt;P&gt;See &lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/tstats"&gt;https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/tstats&lt;/A&gt; for info on tstats and how to format that query.&lt;/P&gt;

&lt;P&gt;Refer to &lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/Search/Typesofcommands"&gt;https://docs.splunk.com/Documentation/Splunk/latest/Search/Typesofcommands&lt;/A&gt; for more info on types of commands and where they run.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 16:54:03 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392062#M114112</guid>
      <dc:creator>vliggio</dc:creator>
      <dc:date>2019-07-18T16:54:03Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392063#M114113</link>
      <description>&lt;P&gt;Better to use summary indexing for this use case.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 16:54:24 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392063#M114113</guid>
      <dc:creator>adonio</dc:creator>
      <dc:date>2019-07-18T16:54:24Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392064#M114114</link>
      <description>&lt;P&gt;True, but I wanted to try the above-mentioned approach, and to understand why the query is not executing.&lt;/P&gt;

&lt;P&gt;@Vijeta Could you please look into this?&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 17:26:27 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392064#M114114</guid>
      <dc:creator>reverse</dc:creator>
      <dc:date>2019-07-18T17:26:27Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392065#M114115</link>
      <description>&lt;P&gt;@reverse Try this,&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt; index=&amp;lt;your index&amp;gt; [|inputlookup TEMP| eval earliest=strptime(last_executed_time,"%m/%d/%Y %H:%M:%S")| return earliest]| stats count(abc) as xyz | append[|inputlookup TEMP]| stats sum(xyz) as total
&lt;/CODE&gt;&lt;/PRE&gt;
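
&lt;P&gt;To illustrate the mechanics (values assumed): if TEMP holds last_executed_time = 07/18/2019 10:00:00, the subsearch converts it to epoch time and &lt;CODE&gt;| return earliest&lt;/CODE&gt; expands into a time bound on the outer search, roughly:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=&amp;lt;your index&amp;gt; earliest=1563444000 | stats count(abc) as xyz | ...
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;(1563444000 is the epoch for 07/18/2019 10:00:00 UTC.) The append then pulls the previously saved xyz back out of TEMP so the final stats can sum the old and new counts.&lt;/P&gt;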

&lt;P&gt;Please note: the time format string you convert from should match the format stored in your lookup table.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 17:51:38 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392065#M114115</guid>
      <dc:creator>Vijeta</dc:creator>
      <dc:date>2019-07-18T17:51:38Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392066#M114116</link>
      <description>&lt;P&gt;@Vijeta &lt;BR /&gt;
you are the MOST  awesome person on this forum !!&lt;/P&gt;

&lt;P&gt;It worked like a charm, reducing my search from 32 seconds to 0.08 seconds.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 18:19:48 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392066#M114116</guid>
      <dc:creator>reverse</dc:creator>
      <dc:date>2019-07-18T18:19:48Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392067#M114117</link>
      <description>&lt;P&gt;@reverse  I have converted this to answer, please accept the correct answer. I am glad it worked for you &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 18:20:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392067#M114117</guid>
      <dc:creator>Vijeta</dc:creator>
      <dc:date>2019-07-18T18:20:41Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392068#M114118</link>
      <description>&lt;P&gt;Note that while this solves the question of how to do the lookup, it is still a horrible way of doing an event count, especially in large environments. I ran an environment that received, conservatively, 4 billion events a day (still relatively small compared to some of the massive ones out there), and people doing stats count on their data caused many issues; when you have hundreds of users running bad queries, it adds up. You really should learn tstats if you are doing any sort of event count.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 19:16:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392068#M114118</guid>
      <dc:creator>vliggio</dc:creator>
      <dc:date>2019-07-18T19:16:09Z</dc:date>
    </item>
    <item>
      <title>Re: Optimizing Tweaks For Slow Queries</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392069#M114119</link>
      <description>&lt;P&gt;@vliggio I agree with you: using tstats is a better option than creating a lookup and doing stats. But I am not sure what exactly the use case is, i.e. whether he will be doing stats count by fields in the actual query, or counting a field that is not part of the metadata.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jul 2019 19:36:42 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Optimizing-Tweaks-For-Slow-Queries/m-p/392069#M114119</guid>
      <dc:creator>Vijeta</dc:creator>
      <dc:date>2019-07-18T19:36:42Z</dc:date>
    </item>
  </channel>
</rss>

