Splunk Search

Timecharting delta by multiple fields

mudragada
Path Finder

My sample event, generated every minute, looks like this:

03/06/2017 15:19:00 -0500, app01:JVM1=12, app01:JVM2=6, app01:JVM3=9, app01:JVM4=3, app01:JVM6=7, app02:JVM1=14, app02:JVM2=0, app02:JVM3=0, app02:JVM4=25, app02:JVM5=11, app02:JVM6=4, app03:JVM1=5, app03:JVM2=9, app03:JVM3=5, app03:JVM4=26, app03:JVM5=10, app03:JVM6=2, app04:JVM1=13, app04:JVM2=4, app04:JVM3=10, app04:JVM4=16, app04:JVM5=13, app04:JVM6=0, app05:JVM1=10, app05:JVM2=47, app05:JVM3=21, app05:JVM4=15, app05:JVM5=13, app05:JVM6=2, app06:JVM1=8, app06:JVM2=8, app06:JVM3=10, app06:JVM4=8, app06:JVM5=8, app06:JVM6=6

I am able to plot a timechart per field by using | timechart max("app*:JVM*") span=1m (or the avg function).
But now I need help in writing the queries to:
1. Sort and show the top 10 key-value pairs in a table, by the sum of the values (across the 5 events) every 5 minutes, in a dashboard
2. Show the outliers from the timeframe's average in a table
3. Sort and show the top 10 key-value pairs with the highest delta between timeframes (every 5 minutes)

In verbose mode, these keys show up under "Interesting Fields", but I was not able to use them as fields.
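Side note: since each event is a plain comma-separated list of key=value pairs, the fields can usually be forced out explicitly with the extract command, and the colon in the names means they have to be quoted wherever they are referenced. A rough, untested sketch, using two field names from the sample event above:

 Your base search
 | extract pairdelim="," kvdelim="="
 | timechart span=1m max("app01:JVM1") max("app02:JVM1")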


DalJeanis
SplunkTrust

Did you mean highest POSITIVE delta or highest change, plus or minus?


mudragada
Path Finder

I was looking for the absolute value, i.e., highest change.


DalJeanis
SplunkTrust

Okay, then use the minor addition I commented onto somesoni2's post.


somesoni2
SplunkTrust

Give this a try

1) Sort and show the top 10 key-value pairs in a table, by the sum of the values (across the 5 events) every 5 minutes, in a dashboard

Your base search | timechart sum("app*:JVM*") span=5m | untable _time metrics data | sort 0 -data by _time
| dedup 10 _time | xyseries _time metrics data

2) Show the outliers from the timeframe's average in a table

Your base search | timechart avg("app*:JVM*") span=1m 
| untable _time metrics data | eventstats avg(data) as avg stdev(data) as stdev by metrics | where data>(2*stdev+avg) OR data<(avg-2*stdev)  

3) Sort and show the top 10 key-value pairs with the highest delta between timeframes (every 5 minutes)

Your base search | timechart sum("app*:JVM*") span=5m | streamstats values(*) as *_prev current=f window=1 
| foreach *_prev [eval delta_<<MATCHSTR>>='<<MATCHSTR>>'-'<<FIELD>>'] | table _time detal_* 
| untable _time metrics data | sort 0 -data by _time
| dedup 10 _time | xyseries _time metrics data
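
One small optional tweak to section 2 above (an untested sketch, reusing somesoni2's avg and stdev field names): append the following to the end of that search to show how many standard deviations each outlier sits from its average, which makes the table easier to read.

 | eval sigma=round(abs(data-avg)/stdev,2)
 | table _time metrics data avg sigma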

DalJeanis
SplunkTrust

Section 3, line 2: the table command should read | table _time delta_* (not detal_*).

If the OP wants the highest change, positive or negative, they can replace section 3, line 3, with this...

 | untable _time metrics data | eval absdata=abs(data) | sort 0 -absdata by _time
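
Putting somesoni2's section 3 together with that replacement (and the delta_* spelling fix noted above), the assembled search would look roughly like this, untested:

 Your base search | timechart sum("app*:JVM*") span=5m | streamstats values(*) as *_prev current=f window=1
 | foreach *_prev [eval delta_<<MATCHSTR>>='<<MATCHSTR>>'-'<<FIELD>>'] | table _time delta_*
 | untable _time metrics data | eval absdata=abs(data) | sort 0 -absdata by _time
 | dedup 10 _time | xyseries _time metrics data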

DalJeanis
SplunkTrust

Here's a random test data generator:

| gentimes start="01/25/2017:23:00:00" end="01/25/2017:23:10:00" increment=1m 
    | eval _time = starttime | table _time 
    | eval myapp="app01 app02 app03 app04 app05 app06" 
    | eval myjvm="JVM1 JVM2 JVM3 JVM4 JVM5 JVM6"
    | makemv myapp | mvexpand myapp | makemv myjvm | mvexpand myjvm
    | eval rand1=round(random()/100000000,0) 
    | where rand1!=12 AND rand1!=22 AND rand1>0
    | rename rand1 as count
    | eval appJVM=myapp.":".myjvm
    | eval {appJVM} = count
    | table _time "app*:JVM*"

This generates transactions over a ten-minute period for all of app01 through app06 and JVM1 through JVM6, then it pokes a few holes in the data, because your posted data has holes.

Then you can play with somesoni2's code to see what it does.
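
For example (untested), section 1 against the generated data would be the gentimes pipeline with somesoni2's search appended in place of "Your base search":

 <the full gentimes pipeline from above>
 | timechart sum("app*:JVM*") span=5m
 | untable _time metrics data | sort 0 -data by _time
 | dedup 10 _time | xyseries _time metrics data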

mudragada
Path Finder

Hi,

In this case, makemv and mvexpand will create multiple events, correct? In a case where I scale up the apps and JVMs over a timeline, what would be the ideal solution?


DalJeanis
SplunkTrust

Correct. If you wanted all the test data together in one field, for instance _raw, you could use this code, replacing the code starting with line 8 of the generator above (the | rename rand1 as count line)...

 | rename rand1 as count
 | eval appJVM=myapp.":".myjvm."=".count
 | stats values(appJVM) as appJVM by _time
 | nomv appJVM
 | rename appJVM as _raw
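
Spliced together with the first seven lines of the generator, the single-event version would be (untested, just the two snippets above joined):

 | gentimes start="01/25/2017:23:00:00" end="01/25/2017:23:10:00" increment=1m
     | eval _time = starttime | table _time
     | eval myapp="app01 app02 app03 app04 app05 app06"
     | eval myjvm="JVM1 JVM2 JVM3 JVM4 JVM5 JVM6"
     | makemv myapp | mvexpand myapp | makemv myjvm | mvexpand myjvm
     | eval rand1=round(random()/100000000,0)
     | where rand1!=12 AND rand1!=22 AND rand1>0
     | rename rand1 as count
     | eval appJVM=myapp.":".myjvm."=".count
     | stats values(appJVM) as appJVM by _time
     | nomv appJVM
     | rename appJVM as _raw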

Sorry, I don't understand the followup question.
