Splunk Search

Subsearches comparing datasets

adamsmith47
Communicator

Hello all,

I have a search technique I've been using to compare smaller sets of data and find the differences; however, I'm running into the subsearch limit with a new, larger data set. I'm hoping someone has a good idea for a different way to perform the search that doesn't run into subsearch limits. Here's the situation:

Each night a system dumps a *.csv log into a directory which Splunk is monitoring and indexing. The csv is approximately 50k lines, therefore approximately 50k events indexed by Splunk. I'm being asked to report each morning on events that exist in today's dump but didn't exist in the previous day's dump. I went to my typical routine below to accomplish this, but I'm hitting that 10k subsearch limit. I assume I could raise the limit, but I'd rather have a more efficient search, if possible.

| set union
[search index=<index> sourcetype=<sourcetype> earliest=@d-1d latest=@d | eval daysago=1 | stats count by <field1> <field2> <field3> daysago | fields - count]
[search index=<index> sourcetype=<sourcetype> earliest=@d latest=@d+1d | eval daysago=0 | stats count by <field1> <field2> <field3> daysago | fields - count]
| stats max(daysago) as daysago by <field1> <field2> <field3> | where daysago=0
| eval Details="Has been added in the past day."
| table Details <field1> <field2> <field3>

I know the logic is sound (I use it for other things), but here the subsearches are just too big.

Any advice is welcome! Thank you.

1 Solution

lguinn2
Legend

You can only up the limit to 10,499 so that isn't going to help. The following technique has no limits and will run much faster:

search index=<index> sourcetype=<sourcetype> earliest=-1d@d
| eval daysago=if(_time>reltime(now,"@d"),daysago=0,daysago=1)
| stats count by <field1> <field2> <field3> daysago 
| fields - count
| stats max(daysago) as daysago by <field1> <field2> <field3> 
| where daysago=0
| eval Details="Has been added in the past day."
| table Details <field1> <field2> <field3>

This technique searches the data set only once, then categorizes the results before comparing them.
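Outside of SPL, the same "tag each row, then keep the max tag per key" set-difference logic can be sketched like this (a Python illustration with made-up key tuples, not part of the original post):

```python
# Set difference via "tag, then max(tag) per key":
# a key present today (daysago=0) but absent yesterday (daysago=1)
# survives with max(daysago) == 0; keys seen both days get max == 1.

yesterday = {("hostA", "svc1"), ("hostB", "svc2")}   # keys from the -1d@d..@d window
today     = {("hostA", "svc1"), ("hostC", "svc3")}   # keys from the @d..now window

daysago = {}                      # key -> max(daysago) seen so far
for key in today:
    daysago[key] = max(daysago.get(key, -1), 0)
for key in yesterday:
    daysago[key] = max(daysago.get(key, -1), 1)

# Equivalent of: | stats max(daysago) ... | where daysago=0
added = [key for key, tag in daysago.items() if tag == 0]
print(added)                      # only ("hostC", "svc3") is new today
```

The key point is that a single pass over the combined data is enough; no second search (or subsearch) is needed to do the comparison.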


DalJeanis
Legend

How about this -

 index=<index> sourcetype=<sourcetype> earliest=-1d@d
| bin _time span=1d
| stats min(_time) as mintime max(_time) as maxtime by <field1> <field2> <field3>
| eventstats min(mintime) as yesterdayepoch max(maxtime) as todayepoch
| where mintime=maxtime
| eval myflag=case(mintime==todayepoch,"Added Record",maxtime==yesterdayepoch,"Deleted Record", true(),"Nonesuch Record")  
| eval _time = mintime 
| table _time <field1> <field2> <field3> myflag

updated case tests to use == rather than =
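The min/max-per-key idea above can be sketched in Python as well (hypothetical day buckets and key tuples, purely illustrative): a key whose earliest and latest day bucket are equal appeared on only one day, and comparing that bucket to the overall window tells you whether it was added or deleted.

```python
# Classify each key by comparing its earliest/latest day bucket
# against the overall two-day window (hypothetical sample data).

rows = [                                   # (epoch day bucket, key)
    (100, ("hostA", "svc1")), (101, ("hostA", "svc1")),   # both days -> unchanged
    (100, ("hostB", "svc2")),                             # yesterday only -> deleted
    (101, ("hostC", "svc3")),                             # today only -> added
]

spans = {}                                 # key -> (mintime, maxtime)
for t, key in rows:
    lo, hi = spans.get(key, (t, t))
    spans[key] = (min(lo, t), max(hi, t))

# Equivalent of the eventstats: window-wide min and max day buckets
yesterday_epoch = min(lo for lo, _ in spans.values())
today_epoch = max(hi for _, hi in spans.values())

flags = {}
for key, (lo, hi) in spans.items():
    if lo != hi:                           # seen in both days -> skip
        continue
    flags[key] = "Added Record" if lo == today_epoch else "Deleted Record"
print(flags)
```

Unlike the max(daysago) approach, this variant reports deleted records as well as added ones in the same pass.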



somesoni2
Revered Legend

I believe we can eliminate the first stats altogether (| stats count...). Also, the earliest time for daysago=0, i.e. @d, is inclusive of events exactly at @d, so the comparison operator in _time>relative_time(now(),"@d") (there is a typo in the relative_time above) should be >=.

adamsmith47
Communicator

Thank you lguinn2 and somesoni2, it's working well!

The form I've ultimately gone with is:

index=<index> sourcetype=<sourcetype> earliest=-1d@d
| eval daysago=if(_time>=relative_time(now(),"@d"),0,1)
| stats max(daysago) as daysago by <field1> <field2> <field3>
| where daysago=0
| eval Details="Has been added in the past day."
| table Details <field1> <field2> <field3>