<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Break up search over large timeframe into searches of smaller timeframes in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580136#M202146</link>
    <description>&lt;P&gt;Does this mean that each day would run separately and in a certain order?&amp;nbsp; Do they run at the same time?&amp;nbsp; If not, which one would run first?&lt;BR /&gt;&lt;BR /&gt;Also, you've given me a great idea.&amp;nbsp; I might be able to nest multiple macros to solve this in a way that can be reused in other cases.&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":beaming_face_with_smiling_eyes:"&gt;😁&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
    <pubDate>Thu, 06 Jan 2022 15:36:41 GMT</pubDate>
    <dc:creator>cyberdiver</dc:creator>
    <dc:date>2022-01-06T15:36:41Z</dc:date>
    <item>
      <title>Break up search over large timeframe into searches of smaller timeframes</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580048#M202112</link>
      <description>&lt;H6&gt;TLDR:&amp;nbsp;&lt;STRONG&gt;I'm trying to automate breaking the large 25-day search up into 25 separate one-day searches.&lt;/STRONG&gt;&lt;/H6&gt;&lt;P class="lia-indent-padding-left-30px"&gt;I'm updating a&amp;nbsp;&lt;STRONG&gt;lookup table&lt;/STRONG&gt; that tracks which indexes are affected by the new&amp;nbsp;&lt;STRONG&gt;log4j&lt;/STRONG&gt; exploit.&amp;nbsp; I do this so that subsequent searches only have to cover the affected indexes.&amp;nbsp; Updating this lookup table for a single day takes hours.&amp;nbsp;&amp;nbsp;&lt;STRONG&gt;The problem is,&amp;nbsp;&lt;/STRONG&gt;I need to know all of the affected indexes over every day since log4j appeared, around December 10th.&amp;nbsp;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Query that updates the lookup table:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;NOT [| inputlookup log4j_indexes.csv | fields index]
| regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"
| table index
| inputlookup append=true log4j_indexes.csv
| dedup index
| outputlookup log4j_indexes.csv&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-indent-padding-left-30px"&gt;Each time this query finishes, it appends log4j-exploit-affected indexes to the lookup table.&amp;nbsp; I need to automate the scanning over a large timeframe (December 10th 2021 - January 5th 2022).&amp;nbsp; However, I want the lookup table to update as it runs over each day.&amp;nbsp;&amp;nbsp;&lt;STRONG&gt;I'm trying to break the large 25-day search up into 25 separate one-day searches automatically.&lt;/STRONG&gt;&amp;nbsp; This also means that if a search fails, I don't lose all progress.&amp;nbsp; I can then apply the same methodology to other searches.&lt;/P&gt;&lt;H3&gt;Lookup Table (Log4J_affected_indexes)&lt;/H3&gt;&lt;TABLE border="1" width="100%"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="100%"&gt;&lt;STRONG&gt;Index&lt;/STRONG&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="100%"&gt;index_1&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="100%"&gt;index_2&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;&lt;H2&gt;How I've tried to solve the problem&lt;/H2&gt;&lt;UL&gt;&lt;LI&gt;Commands I've tried while attempting to solve this:&lt;UL&gt;&lt;LI&gt;&lt;SPAN&gt;foreach&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;map&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;gentimes&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;subsearch&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;saved searches&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;H3&gt;Gentimes (smaller timeframes) -&amp;gt; map&lt;/H3&gt;&lt;P&gt;&lt;U&gt;Explanation of Query below:&lt;/U&gt;&lt;/P&gt;&lt;P class="lia-indent-padding-left-30px"&gt;The&amp;nbsp;&lt;STRONG&gt;gentimes&amp;nbsp;&lt;/STRONG&gt;part creates a table based on the selected timerange:&lt;/P&gt;&lt;TABLE border="1" 
width="99.84399375975039%"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;&lt;STRONG&gt;Earliest&amp;nbsp;&lt;/STRONG&gt;&lt;/TD&gt;&lt;TD&gt;&lt;STRONG&gt;Latest&lt;/STRONG&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="49.921996879875195%"&gt;01/02/2022:00:00:00&lt;/TD&gt;&lt;TD width="49.921996879875195%"&gt;01/03/2022:00:00:00&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="49.921996879875195%"&gt;01/03/2022:00:00:00&lt;/TD&gt;&lt;TD width="49.921996879875195%"&gt;01/04/2022:00:00:00&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="49.921996879875195%"&gt;01/04/2022:00:00:00&lt;/TD&gt;&lt;TD width="49.921996879875195%"&gt;01/05/2022:00:00:00&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-indent-padding-left-30px"&gt;I try to pass those values to a subsearch as the&amp;nbsp;&lt;STRONG&gt;earliest&amp;nbsp;&lt;/STRONG&gt;and&amp;nbsp;&lt;STRONG&gt;latest&amp;nbsp;&lt;/STRONG&gt;parameters using&amp;nbsp;&lt;STRONG&gt;map.&lt;/STRONG&gt;&amp;nbsp; I understand now that map doesn't seem to work for this, and I get no results when the search runs.&lt;/P&gt;&lt;H4&gt;&lt;STRONG&gt;(gentimes and map) Query:&lt;/STRONG&gt;&lt;/H4&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;|gentimes start=-1
		 |addinfo
		 |eval datetime=strftime(mvrange(info_min_time,info_max_time,"1d"),"%m/%d/%Y:%H:%M:%S")
		 |mvexpand datetime
		 |fields datetime
		 |eval latest=datetime
		 |eval input_earliest=strptime(datetime, "%m/%d/%Y:%H:%M:%S") - 86400
		 |eval earliest=strftime(input_earliest, "%m/%d/%Y:%H:%M:%S")
		 |fields earliest, latest
| map search="search NOT [| inputlookup log4j_indexes.csv | fields index] earliest=$earliest$ latest=$latest$
| regex _raw=\"(\$|\%24)(\{|\%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|\%3A|\$|\%24|}|\%7D)\"
| table index
| inputlookup append=true log4j_indexes.csv
| dedup index
| outputlookup log4j_indexes.csv"&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;H3&gt;Gentimes subsearch -&amp;gt; main search&lt;/H3&gt;&lt;P&gt;&lt;U&gt;Explanation of Query below:&lt;/U&gt;&lt;/P&gt;&lt;P class="lia-indent-padding-left-30px"&gt;I use &lt;STRONG&gt;gentimes&lt;/STRONG&gt; in a subsearch to produce smaller timeframes from the larger selected timeframe:&lt;/P&gt;&lt;TABLE border="1" width="340px"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="170px"&gt;&lt;STRONG&gt;Earliest&amp;nbsp;&lt;/STRONG&gt;&lt;/TD&gt;&lt;TD width="170px"&gt;&lt;STRONG&gt;Latest&lt;/STRONG&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="170px"&gt;01/02/2022:00:00:00&lt;/TD&gt;&lt;TD width="170px"&gt;01/03/2022:00:00:00&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="170px"&gt;01/03/2022:00:00:00&lt;/TD&gt;&lt;TD width="170px"&gt;01/04/2022:00:00:00&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="170px"&gt;01/04/2022:00:00:00&lt;/TD&gt;&lt;TD width="170px"&gt;01/05/2022:00:00:00&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-indent-padding-left-30px"&gt;This doesn't give me errors.&amp;nbsp; However, I get no matches.&amp;nbsp; I can almost guarantee this isn't running separate searches per value displayed in the above table.&amp;nbsp; I'm not sure how this can be done.&lt;/P&gt;&lt;H4&gt;&lt;STRONG&gt;(gentimes subsearch)&lt;/STRONG&gt; Query:&lt;/H4&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;NOT [| inputlookup log4j_indexes.csv | fields index]
[|gentimes start=-1
	|addinfo
	|eval datetime=strftime(mvrange(info_min_time,info_max_time,"1d"), 
         "%m/%d/%Y:%H:%M:%S")
	|mvexpand datetime
	|fields datetime
	|eval latest=datetime
	|eval input_earliest=strptime(datetime,"%m/%d/%Y:%H:%M:%S") - 86400
	|eval earliest=strftime(input_earliest,"%m/%d/%Y:%H:%M:%S")
	|fields earliest, latest]
| regex _raw="(\$|\%24)(\{|\%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|\%3A|\$|\%24|}|\%7D)"
| table index
| inputlookup append=true log4j_indexes.csv
| dedup index
| outputlookup log4j_indexes.csv&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;H2&gt;Conclusion&lt;/H2&gt;&lt;H4&gt;Other failed attempts:&lt;/H4&gt;&lt;UL&gt;&lt;LI&gt;using foreach (it can't run non-streaming commands)&lt;/LI&gt;&lt;LI&gt;passing earliest and latest parameters to a saved search&lt;UL&gt;&lt;LI&gt;savedsearch doesn't work this way&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;H4&gt;Other solutions I've thought of:&lt;/H4&gt;&lt;UL&gt;&lt;LI&gt;Running a subsearch that updates a &lt;STRONG&gt;smaller_timeframe.csv file&lt;/STRONG&gt; that keeps track of the smaller timeframes.&amp;nbsp; Then, pass those timeframe parameters (earliest / latest) into a search somehow.&lt;/LI&gt;&lt;LI&gt;Somehow &lt;STRONG&gt;doing a recursive search&lt;/STRONG&gt; where each search triggers the next one.&amp;nbsp; That way, I could have a search trigger another search with the earliest and latest values incremented forward one day (or any amount of time).&lt;/LI&gt;&lt;LI&gt;Maybe Splunk has a feature (not on the search head) that can automate the same search over small timeframes across a large period of time.&amp;nbsp; Maybe this unknown-to-me feature also has scheduling built into it.&lt;/LI&gt;&lt;/UL&gt;&lt;H6&gt;If there is any other information that I can give to help others solve this with me, just ask.&amp;nbsp; I can edit this post...&lt;/H6&gt;</description>
      <pubDate>Wed, 05 Jan 2022 20:47:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580048#M202112</guid>
      <dc:creator>cyberdiver</dc:creator>
      <dc:date>2022-01-05T20:47:51Z</dc:date>
    </item>
    <item>
      <title>Re: Break up search over large timeframe into searches of smaller timeframes</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580064#M202120</link>
      <description>&lt;UL&gt;&lt;LI&gt;There could be opportunities to use keyword/string searches, which are orders of magnitude faster.&lt;/LI&gt;&lt;LI&gt;Your regex could be&amp;nbsp;made significantly more efficient.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Could you provide a few examples of the different iterations of the JNDI string that you are trying to match against?&lt;/P&gt;</description>
      <pubDate>Wed, 05 Jan 2022 23:59:40 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580064#M202120</guid>
      <dc:creator>johnhuang</dc:creator>
      <dc:date>2022-01-05T23:59:40Z</dc:date>
    </item>
    <item>
      <title>Re: Break up search over large timeframe into searches of smaller timeframes</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580077#M202124</link>
      <description>&lt;P&gt;After using that regex, there seem to be hundreds of different iterations.&amp;nbsp; The bigger issue is trying to update the 3 lookup tables I have right now (&lt;STRONG&gt;log4j_affected_indexes.csv&lt;/STRONG&gt;, &lt;STRONG&gt;log4j_affected_sourcetypes.csv&lt;/STRONG&gt;, and &lt;STRONG&gt;log4j_malicious_sources.csv&lt;/STRONG&gt;).&amp;nbsp; Log4J started showing up around December 10th, so I need to log every malicious source that I can, and then query whether there was outbound communication back to those sources.&amp;nbsp; This means having to query ~25 days of logs and extract the malicious sources from them.&amp;nbsp; Doing a full 25-day search probably wouldn't go well, so breaking it up automatically into 25 searches that each last about two hours means it would take ~3 days to run; then I query again to look for outbound communications.&lt;BR /&gt;&lt;BR /&gt;I even wrote regex to extract those malicious sources (IP or URL) from the raw data:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)(?&amp;lt;Exploit&amp;gt;.*?)((\:|%3A)*?)(\/\/|%2F%2F)(((?&amp;lt;MaliciousSource_IP&amp;gt;(\d{1,3}(?:\.\d{1,3}){3}))(?:(.*?)))|(?&amp;lt;MaliciousSource_URL&amp;gt;((([\=\.\$\_\:\{\}]*?)|(%24)|(%7B)|(%7D))*?[\w\d\.]+?[\.\/\:\=]*?)+))((%7D|\}){1})&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 06 Jan 2022 03:46:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580077#M202124</guid>
      <dc:creator>cyberdiver</dc:creator>
      <dc:date>2022-01-06T03:46:51Z</dc:date>
    </item>
    <item>
      <title>Re: Break up search over large timeframe into searches of smaller timeframes</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580081#M202125</link>
      <description>&lt;P&gt;Simplify your searches; reconsider whether it's necessary to find every possible iteration of the jndi string, and whether it's necessary to check every exploit attempt.&lt;/P&gt;&lt;P&gt;For example, this "global" search took 4 seconds to complete against a 1TB/day Splunk environment.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;("${jndi" OR "%24%7Bjndi") earliest=-7d@d
| rex "(?&amp;lt;jndi_string&amp;gt;(\$\{jndi[^\}]*\})|(%24%7Bjndi.*%7D))"
| eval jndi_string=urldecode(jndi_string)
| table _time index sourcetype jndi_string&lt;/LI-CODE&gt;&lt;P&gt;&lt;BR /&gt;If you have a next-gen firewall (NGFW), look for outbound LDAP or DNS connections.&lt;/P&gt;</description>
      <pubDate>Thu, 06 Jan 2022 05:28:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580081#M202125</guid>
      <dc:creator>johnhuang</dc:creator>
      <dc:date>2022-01-06T05:28:41Z</dc:date>
    </item>
    <item>
      <title>Re: Break up search over large timeframe into searches of smaller timeframes</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580106#M202137</link>
      <description>&lt;P&gt;You could try something like this:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;NOT 
    [| inputlookup indexes.csv 
    | table index] 
    [| makeresults 
    | addinfo 
    | eval latest=relative_time(info_max_time,"@d") 
    | eval earliest=latest-(24*60*60) 
    | table earliest latest] 
| table index 
| inputlookup append=true indexes.csv 
| dedup index 
| outputlookup indexes.csv 
| where index="1" 
| append 
    [ search NOT 
        [| inputlookup indexes.csv 
        | table index] 
        [| makeresults 
        | addinfo 
        | eval latest=relative_time(info_max_time,"-1d@d") 
        | eval earliest=latest-(24*60*60) 
        | table earliest latest] 
    | table index 
    | inputlookup append=true indexes.csv 
    | dedup index 
    | outputlookup indexes.csv 
    | where index="1"] 
| append 
    [ search NOT 
        [| inputlookup indexes.csv 
        | table index] 
        [| makeresults 
        | addinfo 
        | eval latest=relative_time(info_max_time,"-2d@d") 
        | eval earliest=latest-(24*60*60) 
        | table earliest latest] 
    | table index 
    | inputlookup append=true indexes.csv 
    | dedup index 
    | outputlookup indexes.csv 
    | where index="1"] 
| append 
    [ search NOT 
        [| inputlookup indexes.csv 
        | table index] 
        [| makeresults 
        | addinfo 
        | eval latest=relative_time(info_max_time,"-3d@d") 
        | eval earliest=latest-(24*60*60) 
        | table earliest latest] 
    | table index 
    | inputlookup append=true indexes.csv 
    | dedup index 
    | outputlookup indexes.csv 
    | where index="1"] 
| append 
    [ search NOT 
        [| inputlookup indexes.csv 
        | table index] 
        [| makeresults 
        | addinfo 
        | eval latest=relative_time(info_max_time,"-4d@d") 
        | eval earliest=latest-(24*60*60) 
        | table earliest latest] 
    | table index 
    | inputlookup append=true indexes.csv 
    | dedup index 
    | outputlookup indexes.csv 
    | where index="1"] 
| append 
    [ search NOT 
        [| inputlookup indexes.csv 
        | table index] 
        [| makeresults 
        | addinfo 
        | eval latest=relative_time(info_max_time,"-5d@d") 
        | eval earliest=latest-(24*60*60) 
        | table earliest latest] 
    | table index 
    | inputlookup append=true indexes.csv 
    | dedup index 
    | outputlookup indexes.csv 
    | where index="1"] 
| append 
    [ search NOT 
        [| inputlookup indexes.csv 
        | table index] 
        [| makeresults 
        | addinfo 
        | eval latest=relative_time(info_max_time,"-6d@d") 
        | eval earliest=latest-(24*60*60) 
        | table earliest latest] 
    | table index 
    | inputlookup append=true indexes.csv 
    | dedup index 
    | outputlookup indexes.csv 
    | where index="1"] 
| append 
    [ search NOT 
        [| inputlookup indexes.csv 
        | table index] 
        [| makeresults 
        | addinfo 
        | eval latest=relative_time(info_max_time,"-7d@d") 
        | eval earliest=latest-(24*60*60) 
        | table earliest latest] 
    | table index 
    | inputlookup append=true indexes.csv 
    | dedup index 
    | outputlookup indexes.csv 
    | where index="1"] 
| append 
    [| inputlookup indexes.csv]&lt;/LI-CODE&gt;&lt;P&gt;Essentially, this goes back 8 days, one day at a time. You can modify the latest and earliest calculations to suit your requirements.&lt;/P&gt;&lt;P&gt;If you create a macro for the append parts, e.g. index_checker(1) taking day as its argument:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| append
    [search NOT 
    [| inputlookup indexes.csv 
        | table index]
    [| makeresults
    | addinfo
    | eval latest=relative_time(info_max_time,"-$day$d@d")
    | eval earliest=latest-(24*60*60)
    | table earliest latest]
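    ``` note: $day$ is replaced by the macro argument when the macro expands, shifting the one-day window back; this comment line is an added annotation and assumes a Splunk version with triple-backtick comment support ```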
    | table index
    | inputlookup append=true indexes.csv
    | dedup index
    | outputlookup indexes.csv
    | where index="1"]&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;The search can be reduced to this:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;NOT 
    [| inputlookup indexes.csv 
        | table index]
    [| makeresults
    | addinfo
    | eval latest=relative_time(info_max_time,"@d")
    | eval earliest=latest-(24*60*60)
    | table earliest latest]
| table index
| inputlookup append=true indexes.csv
| dedup index
| outputlookup indexes.csv
| where index="1"
`index_checker(1)`
`index_checker(2)`
`index_checker(3)`
`index_checker(4)`
`index_checker(5)`
`index_checker(6)`
`index_checker(7)`
| append
    [| inputlookup indexes.csv]&lt;/LI-CODE&gt;</description>
      <pubDate>Thu, 06 Jan 2022 10:23:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580106#M202137</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2022-01-06T10:23:37Z</dc:date>
    </item>
    <item>
      <title>Re: Break up search over large timeframe into searches of smaller timeframes</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580136#M202146</link>
      <description>&lt;P&gt;Does this mean that each day would run separately and in a certain order?&amp;nbsp; Do they run at the same time?&amp;nbsp; If not, which one would run first?&lt;BR /&gt;&lt;BR /&gt;Also, you've given me a great idea.&amp;nbsp; I might be able to nest multiple macros to solve this in a way that can be reused in other cases.&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":beaming_face_with_smiling_eyes:"&gt;😁&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 06 Jan 2022 15:36:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580136#M202146</guid>
      <dc:creator>cyberdiver</dc:creator>
      <dc:date>2022-01-06T15:36:41Z</dc:date>
    </item>
    <item>
      <title>Re: Break up search over large timeframe into searches of smaller timeframes</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580166#M202153</link>
      <description>&lt;P&gt;The solution to breaking a large timeframe up into smaller timeframes uses macros.&amp;nbsp; The macro and the search are listed below:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;H2&gt;Search Query over 7 days (split up into 7 searches)&lt;/H2&gt;&lt;P&gt;Each of the macros updates the lookup table and checks only indexes that haven't yet been checked for log4j.&amp;nbsp; Therefore, the search speeds up as more affected indexes are recorded.&amp;nbsp; The next step of this process is to use the same type of macro to narrow down the affected sourcetypes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;NOT [| inputlookup log4j_indexes.csv | table index]
[| makeresults
    | addinfo
    | eval latest=relative_time(info_max_time,"@d")
    | eval earliest=latest-(24*60*60)
    | eval earliest=strftime(earliest, "%m/%d/%Y:%H:%M:%S")
    | eval latest=strftime(latest, "%m/%d/%Y:%H:%M:%S")
    | table earliest latest]
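``` note: the subsearch above returns a single row with earliest/latest fields, which the outer search applies as its time bounds; this comment line is an added annotation and assumes a Splunk version with triple-backtick comment support ```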
| regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"
| table index
| inputlookup append=true log4j_indexes.csv
| dedup index
| outputlookup log4j_indexes.csv
| `lookup_updater_ultra(lookup_table="log4j_indexes.csv",lookup_field="index",day=1,search_command="regex _raw=\"(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)\"")`
| `lookup_updater_ultra(lookup_table="log4j_indexes.csv",lookup_field="index",day=2,search_command="regex _raw=\"(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)\"")`
| `lookup_updater_ultra(lookup_table="log4j_indexes.csv",lookup_field="index",day=3,search_command="regex _raw=\"(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)\"")`
| `lookup_updater_ultra(lookup_table="log4j_indexes.csv",lookup_field="index",day=4,search_command="regex _raw=\"(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)\"")`
| `lookup_updater_ultra(lookup_table="log4j_indexes.csv",lookup_field="index",day=5,search_command="regex _raw=\"(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)\"")`
| `lookup_updater_ultra(lookup_table="log4j_indexes.csv",lookup_field="index",day=6,search_command="regex _raw=\"(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)\"")`
| `lookup_updater_ultra(lookup_table="log4j_indexes.csv",lookup_field="index",day=7,search_command="regex _raw=\"(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)\"")`
| append [| inputlookup log4j_indexes.csv]&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;H2&gt;&amp;nbsp;Lookup Table Updater Macro&lt;/H2&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;append
[search NOT [| inputlookup $lookup_table$ | table $lookup_field$]
	[| makeresults
	    | addinfo
	    | eval latest=relative_time(info_max_time,"-$day$d@d")
	    | eval earliest=latest-(24*60*60)
	    | eval earliest=strftime(earliest, "%m/%d/%Y:%H:%M:%S")
	    | eval latest=strftime(latest, "%m/%d/%Y:%H:%M:%S")
	    | table earliest latest]
| $search_command$
| inputlookup append=true $lookup_table$
| dedup $lookup_field$
| outputlookup $lookup_table$]&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Possible Improvements:&lt;UL&gt;&lt;LI&gt;Recursive macros&amp;nbsp;&lt;UL&gt;&lt;LI&gt;You could have an if statement within the macro that acts like a for loop and recursively calls the next day down.&lt;/LI&gt;&lt;LI&gt;Example:&amp;nbsp; If you are searching 7 days, pass 7 as the argument to the macro.&amp;nbsp; It then checks, using eval: if day is greater than 1, call the same macro again with day-1 as the argument; otherwise stop at 1.&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 06 Jan 2022 20:02:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Break-up-search-over-large-timeframe-into-searches-of-smaller/m-p/580166#M202153</guid>
      <dc:creator>cyberdiver</dc:creator>
      <dc:date>2022-01-06T20:02:20Z</dc:date>
    </item>
  </channel>
</rss>

