
Comparing counts for different time ranges

queryaslan
Explorer

Hi, when I'm deploying new changes to my services I want to compare the last day's error logs to the last week's to see if there has been an increase for a specific message. I'm having trouble figuring out how to display the counts for the different time ranges by message.

This kind of gives the correct result, but the same message for last week and this week will not be grouped correctly.

sourcetype="my pod" level="error"
| eval marker = if(_time < relative_time(now(), "-1d@d"), "lastweek", "thisweek")
| multireport
    [ where marker="thisweek" | stats count as "this week" by message ]
    [ where marker="lastweek" | stats count as "last week" by message ]

Grateful for any help
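
One way to keep both counts on the same row per message, sketched here on the assumption that the sourcetype, field names, and marker logic from the query above stay the same, is to let chart pivot the counts over message by marker instead of building two separate reports:

sourcetype="my pod" level="error"
| eval marker = if(_time < relative_time(now(), "-1d@d"), "lastweek", "thisweek")
| chart count over message by marker

Each message then appears once, with a "lastweek" and a "thisweek" column; cells with no matching events stay empty unless a | fillnull value=0 is appended.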


ldongradi_splun
Splunk Employee

Assuming you use | timechart, you can follow it immediately with | timewrap to split the display by your chosen span.

For example:

... earliest=-1d@d | timechart count span=1h | timewrap 1h
... earliest=@w | timechart count span=1h | timewrap 1d
... earliest=-4w@w | timechart count span=1d | timewrap 1d

Timewrap will automatically add new series with suffixes such as ...1hour_before, ...2days_before, ...latest_day.
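
Applied to the question's scenario, a rough sketch (assuming sourcetype="my pod", a per-message split, and an eight-day window ending today; none of these choices come from the post above) could be:

sourcetype="my pod" level="error" earliest=-8d@d latest=@d
| timechart span=1d count by message
| timewrap 1w

Each message series is then split into columns roughly of the form ...1week_before and ...latest_week, so yesterday's count lands next to the count from the same weekday a week earlier.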


queryaslan
Explorer

Hmm, won't bin group per _time range like this:

Message    _time         count
failed     2021-12-15    1
failed     2021-12-14    1

That makes it impossible to compare a lot of different messages.

So my desired format is:

Message    Count before release time    Count after release time
failed     1                            1

ITWhisperer
SplunkTrust
sourcetype="my pod" level="error" 
[| makeresults
| eval days=mvappend("0","7")
| mvexpand days
| eval earliest=relative_time(now(),"-".tostring(1+days)."d@d")
| eval latest=relative_time(now(),"-".tostring(days)."d@d")
| table earliest latest]
| bin _time span=1d
| stats count by _time message
| eval time=if(_time<relative_time(now(),"-1d@d"),"Count before yesterday","Count yesterday")
| xyseries message time count
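
For readers unfamiliar with the trick: the [| makeresults ...] subsearch returns two rows of earliest/latest pairs, and Splunk expands a subsearch used in the base search into an OR of its rows, so the generated search is roughly of this shape (the bracketed values are placeholders, not real output):

sourcetype="my pod" level="error" ((earliest=<midnight yesterday> latest=<midnight today>) OR (earliest=<midnight 8 days ago> latest=<midnight 7 days ago>))

The final xyseries then pivots the stats rows so each message becomes a single row with one column per time label.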

PickleRick
SplunkTrust

You have the values. Now all you need is to transform them to the proper format.

If you have them like this:

Message    _time         count
failed     2021-12-15    1
failed     2021-12-14    1

You can do

| xyseries Message _time count

to transform it into the table you want (OK, you'll still have ugly _time values, so you might want to strftime it first).
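
A sketch of that last step, assuming the Message and count field names from the table above; the Day field name and the date format are made up for the example:

| eval Day=strftime(_time, "%Y-%m-%d")
| xyseries Message Day count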

ITWhisperer
SplunkTrust

To get counts for yesterday and the same day a week ago, you could do this

sourcetype="my pod" level="error" 
[| makeresults
| eval days=mvappend("0","7")
| mvexpand days
| eval earliest=relative_time(now(),"-".tostring(1+days)."d@d")
| eval latest=relative_time(now(),"-".tostring(days)."d@d")
| table earliest latest]
| bin _time span=1d
| stats count by _time message

PickleRick
SplunkTrust

It seems you're overthinking it a bit 😉

Just use bin if you want fixed buckets and then do your

stats count by message _time

Oh, and if you compare to "-1d@d", it's not "last week", it's "yesterday" 😉
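
Putting those two remarks together, a minimal sketch (same sourcetype as above; the eight-day window is just one reasonable choice, not something the post specifies):

sourcetype="my pod" level="error" earliest=-8d@d latest=@d
| bin _time span=1d
| stats count by message _time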
