Subsearch for negation

satishsdange
Builder

I am trying to write a subsearch that will negate a few days/hours from the results based on event count, but the search below is not giving me the desired result. Could someone please point me to the right search?

index=_internal sourcetype="splunkd" NOT [search index=_internal sourcetype="splunkd" | timechart count | search count > 10] | timechart count
1 Solution

DalJeanis
SplunkTrust

I have no idea why you are trying to do what you are trying to do, but it really won't work that way.

First, your subsearch scans the _internal index for that sourcetype, then creates a table that contains only _time and count. The _time values returned by timechart will have been binned into buckets of whatever default span timechart picks for the time period you are searching over, so very few events would ever match those exact _time values. And the field count does not even exist on raw _internal events, so that part of the filter will never match anything.
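
To make that concrete, the NOT [ ... ] expands into an implicit filter built from whatever rows the subsearch returns, roughly like this (the timestamps and counts here are made-up placeholders, not values from a real run):

NOT ( ( _time="1494607800" AND count="12345" ) OR ( _time="1494611400" AND count="23456" ) )

Your raw events would have to match one of those exact _time/count pairs for the negation to do anything.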

Even if that worked, you are then asking Splunk to go back and add it all up again... getting no more data than the first time, and no different answer.

Now, understand, anything CAN be done. We could code a way to take the results of a subsearch and collect the data off the _internal index (preferably using tstats rather than a full scan) and use that to populate the search with time inclusion criteria. For some reason, tstats refuses to work correctly with complex _time tests, so you'd feed it back to your original search like this...

index=_internal sourcetype="splunkd" 
    [
    | tstats count as spancount where index=_internal AND sourcetype="splunkd" BY _time span=5m 
    | eval spanstart=_time, spanend=_time+300
    | eval highlow=if(spancount>10000,"high","low")
    | streamstats current=f last(highlow) as prevhighlow
    | eval spanchange=if(highlow=coalesce(prevhighlow,"none"),0,1)
    | streamstats sum(spanchange) as spannumber
    | stats sum(spancount) as spancount count as periods min(spanstart) as spanstart, max(spanend) as spanend, min(highlow) as highlow by spannumber
    | where highlow="low"
    | table spanstart spanend
    | format
    | rex mode=sed field=search "s/spanend=\"/_time</g"
    | rex mode=sed field=search "s/spanstart=\"/_time>=/g"
    | rex mode=sed field=search "s/\"//g"
    | table search
    ]
| bin _time span=5m 
| stats count by _time

All that mumbo-jumbo between | table spanstart spanend and | table search is turning our paired _time values into a search format that reads like...

( ( _time<1494608100 AND _time>=1494607800 ) OR ( _time<1494609000 AND _time>=1494608400 )  ... )
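
For reference (this is my reconstruction of the intermediate step, not output copied from a run), the search field that | format emits just before the three rex mode=sed rewrites would look roughly like...

( ( spanend="1494608100" AND spanstart="1494607800" ) OR ( spanend="1494609000" AND spanstart="1494608400" ) )

The sed expressions rewrite spanend=" to _time< and spanstart=" to _time>=, then strip the leftover quotes, which gives the expression shown above.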

Note that in my highlow test I used 10000 rather than 10, due to the traffic on my index.

...and after all that work, it gets the exact same results as somesoni2's

 index=_internal sourcetype="splunkd" | timechart span=5m count | WHERE count <= 10000


somesoni2
SplunkTrust

Why not just run this

index=_internal sourcetype="splunkd" | timechart count | WHERE count <= 10

satishsdange
Builder

Thanks. But I want to use a subsearch only. Any suggestions?


ckunath
Communicator

Doing it the way somesoni2 suggested is much better. Why do you insist on using a subsearch, if I may ask?


somesoni2
SplunkTrust

Since timechart will use some default span based on the time range (you haven't specified one anywhere), using a subsearch will be a complex solution. If your requirement is met by a single, simple search, is there any specific reason you want to go with a subsearch (you'd be running the same search twice)?
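
If you do want to control the bucket size rather than rely on the default, you can pin the span explicitly; this is just the same single search with an arbitrary span added (span=1h here is only an example):

index=_internal sourcetype="splunkd" | timechart span=1h count | where count <= 10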
