<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Timechart: p99 requests/min by client in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467936#M131736</link>
    <description>&lt;PRE&gt;&lt;CODE&gt; application="my-app" index="my-index" request client_ip="*" user_agent="*" request="*" kube_pod="web-*"
| bin _time span=1s 
| stats count as count_per_sec by _time client_ip
| stats avg(count_per_sec) as count_per_sec  by _time
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Try this and check the &lt;EM&gt;job inspector&lt;/EM&gt;.&lt;/P&gt;</description>
    <pubDate>Mon, 13 Apr 2020 22:15:56 GMT</pubDate>
    <dc:creator>to4kawa</dc:creator>
    <dc:date>2020-04-13T22:15:56Z</dc:date>
    <item>
      <title>Timechart: p99 requests/min by client</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467933#M131733</link>
      <description>&lt;P&gt;I have a dataset of Nginx (a web server) request logs. Each entry contains a &lt;CODE&gt;client_ip&lt;/CODE&gt;. I want to impose some rate limiting, but first I want to see what my current traffic patterns are, so my rate limits don't impede the current regular traffic. There are two rate limit settings available: one expressed as a limit per second, and one as a limit per minute.&lt;/P&gt;

&lt;P&gt;I would like to calculate the requests/second rate of each &lt;CODE&gt;client_ip&lt;/CODE&gt; for each second. I would then like to aggregate those per-client_ip values (playing around with different aggregation functions, like avg, median, p90, p99, max, etc.) into a &lt;CODE&gt;timechart&lt;/CODE&gt;.&lt;/P&gt;

&lt;P&gt;Put another way, I would like this &lt;CODE&gt;timechart&lt;/CODE&gt; to have one data point per minute, each of which shows the p99 requests/second among all the client_ips for that minute. For example, that would give me a per-second rate limit that 99% of clients would pass, while blocking the top 1%.&lt;/P&gt;

&lt;P&gt;I thought this would do it:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;application="my-app" index="my-index" request client_ip="*" user_agent="*" request="*" kube_pod="web-*"
| timechart span=1s count as count_per_sec by client_ip
| timechart span=1s avg(count_per_sec)
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;But all of the &lt;CODE&gt;count_per_sec&lt;/CODE&gt; values come out blank under the "statistics" tab.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2020 18:43:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467933#M131733</guid>
      <dc:creator>amomchilov</dc:creator>
      <dc:date>2020-04-07T18:43:53Z</dc:date>
    </item>
    <item>
      <title>Re: Timechart: p99 requests/min by client</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467934#M131734</link>
      <description>&lt;PRE&gt;&lt;CODE&gt;application="my-app" index="my-index" request client_ip="*" user_agent="*" request="*" kube_pod="web-*"
| timechart span=1s count as count_per_sec by client_ip
| untable _time client_ip count_per_sec 
| stats avg(count_per_sec) as count_per_sec  by _time
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;The result of &lt;CODE&gt;| timechart span=1s count as count_per_sec by client_ip&lt;/CODE&gt; is the following:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;_time X.X.X.X Y.Y.Y.Y Z.Z.Z.Z ...
aa:bb:00 1 2 3 ...
dd:ee:01 4 5 6 ..
...
&lt;/CODE&gt;&lt;/PRE&gt;
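
&lt;P&gt;The &lt;CODE&gt;| untable _time client_ip count_per_sec&lt;/CODE&gt; step then converts that wide table back into one row per (_time, client_ip) pair, which is the shape the final &lt;CODE&gt;stats&lt;/CODE&gt; needs. Roughly (illustrative values matching the table above):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;_time    client_ip count_per_sec
aa:bb:00 X.X.X.X   1
aa:bb:00 Y.Y.Y.Y   2
aa:bb:00 Z.Z.Z.Z   3
...
&lt;/CODE&gt;&lt;/PRE&gt;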

&lt;P&gt;There is no &lt;EM&gt;count_per_sec&lt;/EM&gt; field in this output, so &lt;CODE&gt;| timechart span=1s avg(count_per_sec)&lt;/CODE&gt; can't work.&lt;/P&gt;</description>
      <pubDate>Wed, 30 Sep 2020 04:53:49 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467934#M131734</guid>
      <dc:creator>to4kawa</dc:creator>
      <dc:date>2020-09-30T04:53:49Z</dc:date>
    </item>
    <item>
      <title>Re: Timechart: p99 requests/min by client</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467935#M131735</link>
      <description>&lt;P&gt;Thanks man, this worked wonderfully! The min/median/p99 values were heavily skewed by the IPs with 0 requests/min (which comprise &lt;EM&gt;most&lt;/EM&gt; of the data points), so I fixed it by popping in a &lt;CODE&gt;| where count_per_sec != 0&lt;/CODE&gt;. This had a nice side effect of drastically reducing the memory use. Do you know of any other ways to decrease the memory usage of this? For time scales above a few hours I still get OOM errors (using like 30 GB; the limit for us is 3 GB lol).&lt;/P&gt;</description>
      <pubDate>Mon, 13 Apr 2020 18:13:08 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467935#M131735</guid>
      <dc:creator>amomchilov</dc:creator>
      <dc:date>2020-04-13T18:13:08Z</dc:date>
    </item>
    <item>
      <title>Re: Timechart: p99 requests/min by client</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467936#M131736</link>
      <description>&lt;PRE&gt;&lt;CODE&gt; application="my-app" index="my-index" request client_ip="*" user_agent="*" request="*" kube_pod="web-*"
| bin _time span=1s 
| stats count as count_per_sec by _time client_ip
| stats avg(count_per_sec) as count_per_sec  by _time
&lt;/CODE&gt;&lt;/PRE&gt;
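
&lt;P&gt;If you want the p99-per-minute view from the original question rather than the average, one possible extension (a sketch; &lt;CODE&gt;perc99&lt;/CODE&gt; is Splunk's 99th-percentile aggregation function) is:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;application="my-app" index="my-index" request client_ip="*" user_agent="*" request="*" kube_pod="web-*"
| bin _time span=1s
| stats count as count_per_sec by _time client_ip
| bin _time span=1m
| stats perc99(count_per_sec) as p99_count_per_sec by _time
&lt;/CODE&gt;&lt;/PRE&gt;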

&lt;P&gt;Try this and check the &lt;EM&gt;job inspector&lt;/EM&gt;.&lt;/P&gt;</description>
      <pubDate>Mon, 13 Apr 2020 22:15:56 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467936#M131736</guid>
      <dc:creator>to4kawa</dc:creator>
      <dc:date>2020-04-13T22:15:56Z</dc:date>
    </item>
    <item>
      <title>Re: Timechart: p99 requests/min by client</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467937#M131737</link>
      <description>&lt;P&gt;Wow, that's a night and day difference! Whereas before I couldn't squeeze out more than a 30 minute window, this code let me go back over 7 days! I thought &lt;CODE&gt;timechart&lt;/CODE&gt; worked like &lt;CODE&gt;bin&lt;/CODE&gt; and &lt;CODE&gt;stats&lt;/CODE&gt; together, so I'm surprised there's such a big difference. Is &lt;CODE&gt;untable&lt;/CODE&gt; the culprit? I don't really know how to interpret the job inspector. The profiler chart shows &lt;CODE&gt;startup.handoff&lt;/CODE&gt; eating up pretty much all of the time, and there are basically no other big "chunks".&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2020 14:45:04 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467937#M131737</guid>
      <dc:creator>amomchilov</dc:creator>
      <dc:date>2020-04-14T14:45:04Z</dc:date>
    </item>
    <item>
      <title>Re: Timechart: p99 requests/min by client</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467938#M131738</link>
      <description>&lt;P&gt;&lt;CODE&gt;stats&lt;/CODE&gt; is a simple aggregation and easy to optimize, and it carries only a few fields through the pipeline.&lt;BR /&gt;
But &lt;CODE&gt;timechart&lt;/CODE&gt; searches over the whole period, and &lt;CODE&gt;untable&lt;/CODE&gt; has to wait until &lt;CODE&gt;timechart&lt;/CODE&gt; finishes.&lt;BR /&gt;
So &lt;CODE&gt;stats&lt;/CODE&gt; is faster.&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2020 20:05:10 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Timechart-p99-requests-min-by-client/m-p/467938#M131738</guid>
      <dc:creator>to4kawa</dc:creator>
      <dc:date>2020-04-14T20:05:10Z</dc:date>
    </item>
  </channel>
</rss>

