Here is another technique for fuzzy matching on multi-row output without using a temp file, which avoids file contention when many searches call the code at once. We are using versions of this in some of our RBA enrichment macros. Thanks @japger_splunk for the tip on multireport and @woodcock for the map example!
| makeresults
| eval src="1.1.1.1;2.2.2.2;3.3.3.3;4.4.4.4;5.5.5.5"
| makemv src delim=";"
| mvexpand src `comment("
BEGIN ENRICHMENT BLOCK")`
`comment("REMEMBER ROW ORDER AND MARK AS ORIGINAL DATA")`
| eval original_row=1
| streamstats count AS marker
`comment("FORK THE SEARCH: FIRST TO PRESERVE RESULTS, SECOND TO COMPARE EACH ROW AGAINST LOOKUP")`
| multireport
[ ]
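`comment("THE EMPTY SUBSEARCH PASSES THE ORIGINAL ROWS THROUGH UNTOUCHED")`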
`comment("FOR EACH ROW, RUN FUZZY MATCH AGAINST LOOKUP AND SUMMARIZE THE RESULTS")`
[| map maxsearches=99999 search="
| inputlookup notable_cache
| eval marker=$marker$, src=\"$src$\"
| eval match=if(like(raw,\"%\".src.\"%\"), 1, 0)
| where match==1
| eval age_days = (now()-info_search_time)/86400
| eval in_notable_7d=if(age_days<=7,1,0), in_notable_30d=if(age_days<=30,1,0)
| stats values(marker) AS marker, sum(in_notable_7d) AS in_notable_7d_count, sum(in_notable_30d) AS in_notable_30d_count BY src
"]
`comment("INTERLEAVE THE ORIGINAL RESULTS WITH THE LOOKUP MATCH RESULTS")`
| sort 0 marker, in_notable_30d_count, in_notable_7d_count
`comment("TRANSPOSE DATA FROM ABOVE")`
| streamstats current=f window=1 last(in_notable_30d_count) AS prev_in_notable_30d_count, last(in_notable_7d_count) AS prev_in_notable_7d_count
`comment("GET RID OF THE LOOKUP RESULTS")`
| where original_row==1
`comment("CLEAN UP THE DATA")`
| rename prev_in_notable_30d_count AS in_notable_30d_count, prev_in_notable_7d_count AS in_notable_7d_count
| fillnull value=0 in_notable_30d_count, in_notable_7d_count
| fields - original_row, marker
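To run this against real data, swap the makeresults stanza at the top for your own base search that yields one src value per row, then keep everything from the BEGIN ENRICHMENT BLOCK comment onward unchanged. A minimal sketch, with placeholder index, sourcetype, and field names (not from the original search):

`comment("HYPOTHETICAL BASE SEARCH: INDEX, SOURCETYPE, AND FIELD NAMES ARE PLACEHOLDERS")`
index=proxy sourcetype=web earliest=-24h
| stats values(dest_ip) AS src BY user
| mvexpand src
`comment("BEGIN ENRICHMENT BLOCK, CONTINUE WITH THE PIPELINE ABOVE FROM eval original_row=1")`

Keep in mind that map launches one subsearch per incoming row, so the number of rows coming out of the base search and the size of notable_cache drive the cost; maxsearches=99999 only raises the cap on how many subsearches map is allowed to run.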