Splunk Search

Comparing results from two different dates vertically

mark_groenveld
Path Finder

I would like to compare specific response status stats vertically rather than horizontally, so that the values line up, without relying on the appendcols command.

My search:

| multisearch
    [ search NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) earliest=-4h@m latest=@m | eval date="Today" ]
    [ search NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) earliest=-4h@m-1w latest=@m-1w | eval date="LastWeek" ]
| timechart span=1d count by status

Example display of current results


Desired results

Status  Today  LastWeek
412     1      0
413     1      0
415     0      1
418     0      2
422     6      7

PickleRick
SplunkTrust

Firstly, some general remarks about your search. You don't have to use multisearch.

1. You can specify two time ranges within a single search

 NOT status IN ( 200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504 ) (earliest=-4h@m latest=@m) OR (earliest=-1w-4h@m latest=-1w@m)

and classify each event by its time difference from now

| eval date=if(now()-_time<20000,"today","last week")
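Put together, a single search replacing the multisearch might look like this (a sketch; the 20000-second cutoff works because "Today" events are at most ~4 hours, i.e. 14400 seconds, old, while last week's events are roughly 604800 seconds old):

```
NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504)
    (earliest=-4h@m latest=@m) OR (earliest=-4h@m-1w latest=@m-1w)
| eval date=if(now()-_time<20000, "Today", "LastWeek")
```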

2. Inclusion is better than exclusion. I understand that sometimes exclusion might be the only way to specify the search conditions, but if you can use (or at least add) something like

status IN (412, 403, 40* ...)

Do so. It will be quicker than having to parse every single event. Even if you do have

status IN (10*, 21*, 4*) NOT status IN (200,203,204 ... )

it should be faster than pure exclusion.

3. If you have your data classified (each event has that additional field saying "Today" or "LastWeek"), you can do

| stats count by status date

which will give you almost what you want.

The data is already there, now you need to display it properly

| xyseries status date count

or

| xyseries date status count

depending on which way you want it presented.
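End to end, the stats/xyseries approach might look like this (a sketch; the fillnull at the end is my assumption, added so that status/date combinations with no events show as 0, matching the desired output):

```
NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504)
    (earliest=-4h@m latest=@m) OR (earliest=-4h@m-1w latest=@m-1w)
| eval date=if(now()-_time<20000, "Today", "LastWeek")
| stats count by status date
| xyseries status date count
| fillnull value=0 Today LastWeek
```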

 

You can also do it a completely different way.

Do your initial search

status IN (something) NOT status IN (something) (earliest=long_ago latest=long_ago) OR (earliest=lately latest=lately)

Do your timechart

| timechart span=1d count by status

But now you have many unneeded days in the middle. (Actually, your multisearch will _not_ yield that result - it is limited to just two days, so you could run the multisearch and skip the following filtering step.)

Now we need to filter out the days for which we have no results at all

| eval filter="y"
| foreach *
[ eval filter=if('<<FIELD>>'!=0,"n",filter) ]
| where filter="n"
| fields - filter

OK. So now you have the data (as you had at the very beginning of this thread 😉).

But it's transposed.

So what can you do? Surprise, surprise...

| transpose 0 header_field=_time

Now your data is the other way around.

But you'll notice that two extra fields show up as "values" - _span and _spandays. These are hidden internal Splunk fields which should be filtered out before the transpose command with

| fields - _span _spandays

And of course the timestamp is an epoch-based Unix timestamp, so after transposing you get an integer as the column header. Use strftime (or another method) to format _time into a human-readable value before transposing.
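So the tail end of the timechart approach might look like this (a sketch; the strftime format string is just an example):

```
| fields - _span _spandays
| eval _time=strftime(_time, "%Y-%m-%d")
| transpose 0 header_field=_time
```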

 

isoutamo
SplunkTrust
You should also look at the timewrap command, which can help with this kind of comparison.
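For example, a timewrap-based sketch (assuming a single continuous time range covering both weeks; the 1h span and the per-status split are illustrative choices, and timewrap will suffix the series names with period labels):

```
NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) earliest=-4h@m-1w latest=@m
| timechart span=1h count by status
| timewrap 1week
```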