<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Recursive Query over Time in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540944#M153107</link>
    <description>&lt;P&gt;Thank you for the help and for introducing me to the untable command.&amp;nbsp; This was very close, and I am playing with it to see if I can solve the one remaining issue.&amp;nbsp; The summary over the 7 days is perfect, but I need to dedup within those 7 days and then aggregate.&amp;nbsp; For instance:&lt;/P&gt;&lt;P&gt;Day 1: a,b,c&lt;/P&gt;&lt;P&gt;Day 2: a,b,c,d&lt;/P&gt;&lt;P&gt;Day 3: a,b,f&lt;/P&gt;&lt;P&gt;Days 4-7: a,b,c&lt;/P&gt;&lt;P&gt;I should end up across those 7 days with a distinct count of 5 (a,b,c,d,f).&amp;nbsp; I don't think I can reduce after the timechart, so I am playing with eventstats there.&amp;nbsp; Much appreciated if you have further time/guidance.&lt;/P&gt;</description>
    <pubDate>Tue, 23 Feb 2021 01:43:29 GMT</pubDate>
    <dc:creator>rneel</dc:creator>
    <dc:date>2021-02-23T01:43:29Z</dc:date>
    <item>
      <title>Recursive Query over Time</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540558#M152935</link>
      <description>&lt;P&gt;I am searching for the best way to create a time chart from queries that have to evaluate data over a period of time.&amp;nbsp; The item I am counting is vulnerability data, and that data is built from scan outputs that occur at different times across different assets throughout the week.&amp;nbsp; For instance, if I ran this query over the past 7 days for today:&lt;/P&gt;&lt;P&gt;index="qualys" sourcetype="qualys:hostdetection" TYPE="CONFIRMED" OS=[OS] PATCHABLE=YES | dedup HOST_ID QID sortby -_time | search NOT STATUS=FIXED | stats count by severity&lt;/P&gt;&lt;P&gt;I would get back information on all open vulnerabilities by severity (critical, high, medium, low).&lt;/P&gt;&lt;P&gt;I now need to show that trend over a 14 day period in a timechart, with the issue being that any one day has to be a 7 day lookback to get an accurate total.&amp;nbsp; I thought of using a macro and then doing an append, but that seems expensive.&amp;nbsp; I also considered running the query repeatedly with earliest=-7d@d latest=[-appropriate day count].&lt;/P&gt;&lt;P&gt;I am sure there is a more elegant way, though.&amp;nbsp; Any advice is greatly appreciated.&lt;/P&gt;</description>
      <pubDate>Fri, 19 Feb 2021 16:59:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540558#M152935</guid>
      <dc:creator>rneel</dc:creator>
      <dc:date>2021-02-19T16:59:44Z</dc:date>
    </item>
    <item>
      <title>Re: Recursive Query over Time</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540666#M152967</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/215711"&gt;@rneel&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can use streamstats to generate rolling summarizations. I'll use the _internal index in this example, but you can modify it to use your base search:&lt;/P&gt;&lt;P&gt;index=_internal sourcetype=splunkd source=*/splunkd.log* earliest=-14d@d latest=@d&lt;BR /&gt;| timechart fixedrange=f span=1d count as subtotal by log_level&lt;BR /&gt;| untable _time log_level subtotal&lt;BR /&gt;| streamstats time_window=7d sum(subtotal) as total by log_level&lt;BR /&gt;| timechart span=1d max(total) as total by log_level&lt;BR /&gt;| where _time&amp;gt;relative_time(relative_time(now(), "@d"), "-7d@d")&lt;/P&gt;&lt;P&gt;I want a rolling count for the last 7 days, so I've expanded my time range to 14 days to ensure day 1 includes 7 days of prior data.&lt;/P&gt;&lt;P&gt;From there, I've reduced the initial result set to a daily summary using timechart followed by untable.&lt;/P&gt;&lt;P&gt;Then I've used streamstats to generate a rolling 7 day total from the daily subtotal.&lt;/P&gt;&lt;P&gt;Finally, I've summarized the total by the field of interest and truncated the results to the last 7 days. (I've used the max aggregation for simplicity. There should only be one total value per log_level per day.)&lt;/P&gt;</description>
      <pubDate>Sat, 20 Feb 2021 19:36:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540666#M152967</guid>
      <dc:creator>tscroggins</dc:creator>
      <dc:date>2021-02-20T19:36:44Z</dc:date>
    </item>
    <item>
      <title>Re: Recursive Query over Time</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540944#M153107</link>
      <description>&lt;P&gt;Thank you for the help and for introducing me to the untable command.&amp;nbsp; This was very close, and I am playing with it to see if I can solve the one remaining issue.&amp;nbsp; The summary over the 7 days is perfect, but I need to dedup within those 7 days and then aggregate.&amp;nbsp; For instance:&lt;/P&gt;&lt;P&gt;Day 1: a,b,c&lt;/P&gt;&lt;P&gt;Day 2: a,b,c,d&lt;/P&gt;&lt;P&gt;Day 3: a,b,f&lt;/P&gt;&lt;P&gt;Days 4-7: a,b,c&lt;/P&gt;&lt;P&gt;I should end up across those 7 days with a distinct count of 5 (a,b,c,d,f).&amp;nbsp; I don't think I can reduce after the timechart, so I am playing with eventstats there.&amp;nbsp; Much appreciated if you have further time/guidance.&lt;/P&gt;</description>
      <pubDate>Tue, 23 Feb 2021 01:43:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540944#M153107</guid>
      <dc:creator>rneel</dc:creator>
      <dc:date>2021-02-23T01:43:29Z</dc:date>
    </item>
    <item>
      <title>Re: Recursive Query over Time</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540945#M153108</link>
      <description>&lt;P&gt;I slightly misspoke in the last post.&amp;nbsp; While it would be a count of 5 total, it would actually be a subcount by severity.&amp;nbsp; So for instance:&lt;/P&gt;&lt;P&gt;a=critical&lt;/P&gt;&lt;P&gt;b=high&lt;/P&gt;&lt;P&gt;c=high&lt;/P&gt;&lt;P&gt;d=medium&lt;/P&gt;&lt;P&gt;f=low&lt;/P&gt;&lt;P&gt;This would give me a single set for that view across the 7 days (an accurate picture of the current state): 1 (critical), 2 (high), 1 (medium), 1 (low).&lt;/P&gt;</description>
      <pubDate>Tue, 23 Feb 2021 01:47:22 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540945#M153108</guid>
      <dc:creator>rneel</dc:creator>
      <dc:date>2021-02-23T01:47:22Z</dc:date>
    </item>
    <item>
      <title>Re: Recursive Query over Time</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540949#M153110</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/215711"&gt;@rneel&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If I understand correctly, that's actually much simpler:&lt;/P&gt;&lt;P&gt;index=_internal sourcetype=splunkd source=*/splunkd.log* earliest=-7d@d latest=@d&lt;BR /&gt;| bin _time span=1d&lt;BR /&gt;| stats dc(_time) as days by log_level&lt;/P&gt;&lt;P&gt;In this example, I've binned _time into days and then counted the distinct number of days per log level. Just replace the base search and log_level field with your data.&lt;/P&gt;&lt;P&gt;If your severity field is indexed, you can use tstats for better performance. Here's an example using the source field:&lt;/P&gt;&lt;P&gt;| tstats values(source) as source where index=_internal sourcetype=splunkd earliest=-7d@d latest=@d by _time span=1d&lt;BR /&gt;| mvexpand source&lt;BR /&gt;| stats dc(_time) as days by source&lt;/P&gt;&lt;P&gt;If your severity field is not indexed but does exist in the raw data as e.g. severity=critical, you can combine tstats with TERM and PREFIX for similar performance gains:&lt;/P&gt;&lt;P&gt;| tstats count where index=_internal sourcetype=scheduler TERM(priority=*) earliest=-7d@d latest=@d by _time span=1d PREFIX(priority=)&lt;BR /&gt;| rename "priority=" as priority&lt;BR /&gt;| stats dc(_time) as days by priority&lt;/P&gt;&lt;P&gt;The key is the raw data containing some field name and some value separated by a minor breaker. For example, raw data containing severity_critical could be parsed with:&lt;/P&gt;&lt;P&gt;| tstats count where index=main TERM(severity_*) earliest=-7d@d latest=@d by _time span=1d PREFIX(severity_)&lt;BR /&gt;| rename "severity_" as severity&lt;BR /&gt;| stats dc(_time) as days by severity&lt;/P&gt;&lt;P&gt;PREFIX is very powerful!&lt;/P&gt;</description>
      <pubDate>Tue, 23 Feb 2021 03:24:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/540949#M153110</guid>
      <dc:creator>tscroggins</dc:creator>
      <dc:date>2021-02-23T03:24:09Z</dc:date>
    </item>
    <item>
      <title>Re: Recursive Query over Time</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/541102#M153174</link>
      <description>&lt;P&gt;So I would like to mark all of these as answers because I learned something from each one; then you could mark me as a poor explainer for not being crisp about the challenge!&amp;nbsp; If you are up for it, I wanted to take one more shot.&lt;/P&gt;&lt;P&gt;There are agents across machines that bring in data every day, multiple times a day.&amp;nbsp; There are also scans that take place across the environment that bring back similar data, but they only occur every few days.&amp;nbsp; A view of any one system is the aggregate of the distinct data for that system collected from both the host and the network.&amp;nbsp; All of this data is organized in the same manner in Splunk.&amp;nbsp; Because data comes in different groupings, an accurate view at any moment in time requires that you look back several days, so you can be sure to count those issues that show up in the data less frequently.&amp;nbsp;&lt;/P&gt;&lt;P&gt;As an example, data for 4 days might look like this (using a 3 day lookback to keep the amount of data small):&lt;/P&gt;&lt;P&gt;Day 1 – only information from host scans is fed in; data from Splunk would contain:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System A (e.g. system name), issue A (e.g. vuln xxx), severity (e.g. high/medium/low), status (e.g. 
opened, closed etc..)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System B, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System C, issue C, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System A, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System B, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System C, issue C, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;** Note that the data is collected multiple times&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Day 2 – similar result with systems reporting in multiple times a day&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System A, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System B, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System C, issue C, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System A, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System B, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System C, issue C, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Day 3 – Similar but we now have something introduced from the network scan (All in Blue)&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System A, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System B, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System C, issue C, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System A, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System B, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System C, issue C, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System A, issue D, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#00CCFF"&gt;&lt;STRONG&gt;System A, issue E, severity, 
status&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#00CCFF"&gt;&lt;STRONG&gt;System B, issue F, severity, status&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#00CCFF"&gt;&lt;STRONG&gt;System C, issue G, severity, status&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Day 4 – similar result with systems reporting in multiple times a day&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System A, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System B, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System C, issue C, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System A, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System B, issue A, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;System C, issue C, severity, status&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;The goal is to go across 3 days to get the number of distinct issues on a system then aggregate the issue counts across all systems into counts by severity.&amp;nbsp; On any one system you should end up with an issue counted only once, but across systems the issue may show multiple times. 
&amp;nbsp;&amp;nbsp;I can accomplish this with no problem when I am looking back to create an output for any single day.&amp;nbsp; But my skills quickly deteriorate if I want to show a historical view of open issues in the environment in a time chart.&amp;nbsp; Your first answer:&lt;/P&gt;&lt;P&gt;index=_internal sourcetype=splunkd source=*/splunkd.log* earliest=-14d@d latest=@d&lt;BR /&gt;| timechart fixedrange=f span=1d count as subtotal by log_level&lt;BR /&gt;| untable _time log_level subtotal&lt;BR /&gt;| streamstats time_window=7d sum(subtotal) as total by log_level&lt;BR /&gt;| timechart span=1d max(total) as total by log_level&lt;BR /&gt;| where _time&amp;gt;relative_time(relative_time(now(), "@d"), "-7d@d")&lt;/P&gt;&lt;P&gt;worked beautifully, with the one exception that it would count a single issue on a host multiple times as it rolled back across the days.&amp;nbsp; I did some digging and did not see a way to deduplicate later in the process so that for any one set of days you are counting each open system issue only once.&amp;nbsp; So in the new example:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Day one would be a lookback of days 1-3.&amp;nbsp; For each system you would only count issue xx once across that time span.&amp;nbsp; You would then count each individual system's issues and group that by severity.&amp;nbsp; So System A would only show Issue A once even though it appeared 3 times.&amp;nbsp; System B might also have issue A, in which case it would be counted once as well.&amp;nbsp;&lt;/LI&gt;&lt;LI&gt;Day two would be a lookback of days 2-4.&lt;/LI&gt;&lt;LI&gt;Etc….&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Wed, 24 Feb 2021 02:23:14 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/541102#M153174</guid>
      <dc:creator>rneel</dc:creator>
      <dc:date>2021-02-24T02:23:14Z</dc:date>
    </item>
    <item>
      <title>Re: Recursive Query over Time</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/541635#M153351</link>
      <description>&lt;P&gt;It helps me to visualize the summary time ranges using earliest and latest. E.g. for a rolling window of three days covering the last three days:&lt;/P&gt;&lt;P&gt;1: earliest=-3d@d latest=-0d@d&lt;BR /&gt;2: earliest=-4d@d latest=-1d@d&lt;BR /&gt;3: earliest=-5d@d latest=-2d@d&lt;/P&gt;&lt;P&gt;-0d@d is equivalent to @d, but using -0d@d keeps the formatting consistent.&lt;/P&gt;&lt;P&gt;The base search earliest and latest values are the range (the total span) of the desired earliest and latest values:&lt;/P&gt;&lt;P&gt;tag=report tag=vulnerability earliest=-5d@d latest=-0d@d&lt;/P&gt;&lt;P&gt;At the end of the search, we'll add a where command to truncate search results to the last three summarized days:&lt;/P&gt;&lt;P&gt;| where _time&amp;gt;=relative_time(relative_time(now(), "-0d@d"), "-3d@d")&lt;/P&gt;&lt;P&gt;We first summarize the distinct count of dest by day, signature, and severity. (Substitute signature with another field or fields as needed to uniquely identify a vulnerability.) I.e. 
We count each occurrence of dest once per day per signature and severity combination:&lt;/P&gt;&lt;P&gt;tag=report tag=vulnerability earliest=-5d@d latest=-0d@d&lt;BR /&gt;| bin _time span=1d&lt;BR /&gt;| stats dc(dest) as subtotal by _time signature severity&lt;/P&gt;&lt;P&gt;We use streamstats to summarize the subtotal by severity over a span of three days:&lt;/P&gt;&lt;P&gt;tag=report tag=vulnerability earliest=-5d@d latest=-0d@d&lt;BR /&gt;| bin _time span=1d&lt;BR /&gt;| stats dc(dest) as subtotal by _time signature severity&lt;BR /&gt;| streamstats time_window=3d sum(subtotal) as total by severity&lt;/P&gt;&lt;P&gt;I previously used timechart to pivot the results over _time, but you can also use xyseries:&lt;/P&gt;&lt;P&gt;tag=report tag=vulnerability earliest=-5d@d latest=-0d@d&lt;BR /&gt;| bin _time span=1d&lt;BR /&gt;| stats dc(dest) as subtotal by _time signature severity&lt;BR /&gt;| streamstats time_window=3d sum(subtotal) as total by severity&lt;BR /&gt;| xyseries _time severity total&lt;/P&gt;&lt;P&gt;Finally, add the where command we prepared earlier:&lt;/P&gt;&lt;P&gt;tag=report tag=vulnerability earliest=-5d@d latest=-0d@d&lt;BR /&gt;| bin _time span=1d&lt;BR /&gt;| stats dc(dest) as subtotal by _time signature severity&lt;BR /&gt;| streamstats time_window=3d sum(subtotal) as total by severity&lt;BR /&gt;| xyseries _time severity total&lt;BR /&gt;| where _time&amp;gt;=relative_time(relative_time(now(), "-0d@d"), "-3d@d")&lt;/P&gt;&lt;P&gt;The result is the distinct count of dest values by signature and severity over _time.&lt;/P&gt;&lt;P&gt;If you'd like, you can add a SUBTOTAL column for each day and a TOTAL row for all days:&lt;/P&gt;&lt;P&gt;tag=report tag=vulnerability earliest=-5d@d latest=-0d@d&lt;BR /&gt;| bin _time span=1d&lt;BR /&gt;| stats dc(dest) as subtotal by _time signature severity&lt;BR /&gt;| streamstats time_window=3d sum(subtotal) as total by severity&lt;BR /&gt;| xyseries _time severity total&lt;BR /&gt;| where 
_time&amp;gt;=relative_time(relative_time(now(), "-0d@d"), "-3d@d")&lt;BR /&gt;| addtotals fieldname=SUBTOTAL&lt;BR /&gt;| addcoltotals labelfield=_time label=TOTAL&lt;/P&gt;&lt;P&gt;You can also refactor the base search and stats to use the Vulnerabilities data model and tstats.&lt;/P&gt;&lt;P&gt;With or without acceleration:&lt;/P&gt;&lt;P&gt;| tstats dc(Vulnerabilities.dest) as subtotal from datamodel=Vulnerabilities where earliest=-5d@d latest=-0d@d by _time span=1d Vulnerabilities.signature Vulnerabilities.severity&lt;BR /&gt;| streamstats time_window=3d sum(subtotal) as total by Vulnerabilities.severity&lt;BR /&gt;| xyseries _time Vulnerabilities.severity total&lt;BR /&gt;| where _time&amp;gt;=relative_time(relative_time(now(), "-0d@d"), "-3d@d")&lt;BR /&gt;| addtotals fieldname=SUBTOTAL&lt;BR /&gt;| addcoltotals labelfield=_time label=TOTAL&lt;/P&gt;&lt;P&gt;With accelerated summaries only:&lt;/P&gt;&lt;P&gt;| tstats summariesonly=t dc(Vulnerabilities.dest) as subtotal from datamodel=Vulnerabilities where earliest=-5d@d latest=-0d@d by _time span=1d Vulnerabilities.signature Vulnerabilities.severity&lt;BR /&gt;| streamstats time_window=3d sum(subtotal) as total by Vulnerabilities.severity&lt;BR /&gt;| xyseries _time Vulnerabilities.severity total&lt;BR /&gt;| where _time&amp;gt;=relative_time(relative_time(now(), "-0d@d"), "-3d@d")&lt;BR /&gt;| addtotals fieldname=SUBTOTAL&lt;BR /&gt;| addcoltotals labelfield=_time label=TOTAL&lt;/P&gt;</description>
      <pubDate>Sat, 27 Feb 2021 19:05:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/541635#M153351</guid>
      <dc:creator>tscroggins</dc:creator>
      <dc:date>2021-02-27T19:05:47Z</dc:date>
    </item>
    <item>
      <title>Re: Recursive Query over Time</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/541681#M153372</link>
      <description>&lt;P&gt;Thank you for all the time you spent helping with this!&lt;/P&gt;</description>
      <pubDate>Sun, 28 Feb 2021 20:48:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Recursive-Query-over-Time/m-p/541681#M153372</guid>
      <dc:creator>rneel</dc:creator>
      <dc:date>2021-02-28T20:48:37Z</dc:date>
    </item>
  </channel>
</rss>