<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Optimize the query that hits disk usage when computing with stats and percentage in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610464#M212306</link>
    <description>&lt;P&gt;So, for a single globalOpId, in your example&amp;nbsp;&lt;SPAN&gt;0000016, does that mean the list of values is very large for that single row? The table you show has 6 rows, but you state 4 rows - can you clarify whether a row in your table corresponds to a row in the sense you describe?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Can you state how many values(point) you have for a SINGLE global code and _time - roughly what count of values do you have?&lt;/P&gt;&lt;P&gt;If you have several million point values for each row, then that is why it is so slow.&lt;/P&gt;&lt;P&gt;Can you clarify what you are trying to do? If your point cardinality is very high, you should not collect values(point) and then split them out again.&lt;/P&gt;&lt;P&gt;Without knowing your data, can you do the first stats by global point _time rather than just global _time, and then see if you can work out your calculations from that data?&lt;/P&gt;</description>
    <pubDate>Tue, 23 Aug 2022 08:38:59 GMT</pubDate>
    <dc:creator>bowesmana</dc:creator>
    <dc:date>2022-08-23T08:38:59Z</dc:date>
    <item>
      <title>How to optimize the query that hits disk usage when computing with stats and percentage?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610428#M212279</link>
      <description>&lt;LI-CODE lang="markup"&gt; index=A host="bd*" OR host="p*" source="/apps/logs/*"
| bin _time span="30m"
| stats values(point) as point values(promotion) as promotionAction BY global  _time
| stats count(eval(promotionAction="OFFERED")) AS Offers count(eval(promotionAction="ACCEPTED")) AS Redeemed BY _time point
| eval Take_Rate_Percent=((Redeemed)/(Offers)*100)
| eval Take_Rate_Percent=round(Take_Rate_Percent,2)&lt;/LI-CODE&gt;
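&lt;P&gt;For comparison, the two stats passes above could be collapsed into a single pass that skips the intermediate values() collection - an untested sketch, which assumes promotion and point are plain event-level fields and that counting individual events (rather than distinct global operations) is acceptable:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt; index=A (host="bd*" OR host="p*") source="/apps/logs/*" (promotion="OFFERED" OR promotion="ACCEPTED")
| bin _time span="30m"
| stats count(eval(promotion="OFFERED")) AS Offers count(eval(promotion="ACCEPTED")) AS Redeemed BY _time point
| eval Take_Rate_Percent=round(Redeemed/Offers*100,2)&lt;/LI-CODE&gt;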
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;This search runs fine for a 15 min range, but when I search over more than 15 min it gives "search suspended" due to the huge volume of data. Please help me optimize the query.&lt;BR /&gt;&lt;BR /&gt;Thank you in advance&lt;BR /&gt;veeru&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2022 13:44:30 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610428#M212279</guid>
      <dc:creator>Veeru</dc:creator>
      <dc:date>2022-08-23T13:44:30Z</dc:date>
    </item>
    <item>
      <title>Re: Optimize the query that hits disk usage when computing with stats and percentage</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610435#M212286</link>
      <description>&lt;P&gt;What is your time range?&lt;/P&gt;&lt;P&gt;How many values(promotion) and values(point) do you expect to have?&lt;/P&gt;&lt;P&gt;What is the cardinality of global?&lt;/P&gt;&lt;P&gt;Have you looked at the job inspector to see where the time is being spent?&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2022 05:10:10 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610435#M212286</guid>
      <dc:creator>bowesmana</dc:creator>
      <dc:date>2022-08-23T05:10:10Z</dc:date>
    </item>
    <item>
      <title>Re: Optimize the query that hits disk usage when computing with stats and percentage</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610442#M212291</link>
      <description>&lt;P&gt;So my time range is more than 15 days, but the issue is that for the last 24 hours alone I'm having more than 4 lakh (400,000) events.&lt;BR /&gt;I want to optimize the search so the dashboard panel runs fast.&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2022 06:05:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610442#M212291</guid>
      <dc:creator>Veeru</dc:creator>
      <dc:date>2022-08-23T06:05:53Z</dc:date>
    </item>
    <item>
      <title>Re: Optimize the query that hits disk usage when computing with stats and percentage</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610448#M212295</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/6367"&gt;@bowesmana&lt;/a&gt;&lt;BR /&gt;&lt;BR /&gt;When I run it for the last 7 days:&lt;BR /&gt;This search has completed and has returned 337 results by scanning 49,396,521 events in 539.528 seconds.&lt;BR /&gt;I want to optimize it to take fewer seconds.&lt;BR /&gt;The stats command is taking the most time - can you please help me find an alternative for this?&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2022 06:30:27 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610448#M212295</guid>
      <dc:creator>Veeru</dc:creator>
      <dc:date>2022-08-23T06:30:27Z</dc:date>
    </item>
    <item>
      <title>Re: Optimize the query that hits disk usage when computing with stats and percentage</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610454#M212299</link>
      <description>&lt;P&gt;So when you run the first part of the search&lt;/P&gt;&lt;LI-CODE lang="markup"&gt; index=A host="bd*" OR host="p*" source="/apps/logs/*"
| bin _time span="30m"
| stats values(point) as point values(promotion) as promotionAction BY global  _time&lt;/LI-CODE&gt;&lt;P&gt;how many values(point) and values(promotion) do you get per global/_time, and what is the number of rows, if you run this for 24 hours?&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2022 07:07:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610454#M212299</guid>
      <dc:creator>bowesmana</dc:creator>
      <dc:date>2022-08-23T07:07:59Z</dc:date>
    </item>
    <item>
      <title>Re: Optimize the query that hits disk usage when computing with stats and percentage</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610456#M212300</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/6367"&gt;@bowesmana&lt;/a&gt;&lt;BR /&gt;For&lt;/P&gt;&lt;PRE&gt;| stats values(point) as point values(promotion) as promotionAction BY global  _time&lt;/PRE&gt;&lt;P&gt;I'm getting 6,446,807 results by scanning 6,773,378 events in 78.521 seconds,&lt;BR /&gt;for 4 rows:&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;globalOpId&lt;/TD&gt;&lt;TD&gt;_time&lt;/TD&gt;&lt;TD&gt;pointBankCode&lt;/TD&gt;&lt;TD&gt;promotionAction&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;0000016&lt;/TD&gt;&lt;TD&gt;2022-08-22 19:00:00&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;000003b&lt;/TD&gt;&lt;TD&gt;2022-08-22 14:00:00&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;00000bb4&lt;/TD&gt;&lt;TD&gt;2022-08-22 07:00:00&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;TD&gt;ACCEPTED&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;00000c41&lt;/TD&gt;&lt;TD&gt;2022-08-22 05:30:00&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;00001136&lt;/TD&gt;&lt;TD&gt;2022-08-22 21:00:00&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;000015e7&lt;/TD&gt;&lt;TD&gt;2022-08-22 14:30:00&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;</description>
      <pubDate>Tue, 23 Aug 2022 07:29:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610456#M212300</guid>
      <dc:creator>Veeru</dc:creator>
      <dc:date>2022-08-23T07:29:18Z</dc:date>
    </item>
    <item>
      <title>Re: Optimize the query that hits disk usage when computing with stats and percentage</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610464#M212306</link>
      <description>&lt;P&gt;So, for a single globalOpId, in your example&amp;nbsp;&lt;SPAN&gt;0000016, does that mean the list of values is very large for that single row? The table you show has 6 rows, but you state 4 rows - can you clarify whether a row in your table corresponds to a row in the sense you describe?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Can you state how many values(point) you have for a SINGLE global code and _time - roughly what count of values do you have?&lt;/P&gt;&lt;P&gt;If you have several million point values for each row, then that is why it is so slow.&lt;/P&gt;&lt;P&gt;Can you clarify what you are trying to do? If your point cardinality is very high, you should not collect values(point) and then split them out again.&lt;/P&gt;&lt;P&gt;Without knowing your data, can you do the first stats by global point _time rather than just global _time, and then see if you can work out your calculations from that data?&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2022 08:38:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-optimize-the-query-that-hits-disk-usage-when-computing/m-p/610464#M212306</guid>
      <dc:creator>bowesmana</dc:creator>
      <dc:date>2022-08-23T08:38:59Z</dc:date>
    </item>
  </channel>
</rss>

