Splunk Search

How to get a sparkline to run on a different time span?

syx093
Communicator

Is it possible to have a sparkline run over a different time span than the 15 minutes I have set for this search? What I am trying to do is create a report that finds the top 20 hosts with the greatest relative standard deviation over a 15-minute (or shorter) window, but I want the sparkline to be based on the last 4 hours of events, because over only 15 minutes the sparkline does not look "pretty". Alternatively, could we take the average and standard deviation over only a 15-minute slice while the main search spans 4 hours?

index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| stats sparkline(avg(CPU),1m) as CPU_trend avg(CPU) as UsageCPU stdev(CPU) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort -stdev_over_mean
| head 20
| fields host, CPU_trend, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)" stdev_over_mean as RelStdDev
1 Solution

woodcock
Esteemed Legend

The only ways to work with multiple timeframes are to use a subsearch, or to have the main/outer search cover the broadest time range and handle the different slices inside the search. This example is of the former sort and uses appendcols, assuming the main/outer search is run over "last 15 minutes":

index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| stats avg(CPU) as UsageCPU stdev(CPU) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort -stdev_over_mean
| head 20
| fields host, CPU_trend, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)" stdev_over_mean as RelStdDev
| sort 0 host
| appendcols
    [ earliest=-4h index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
    | eval CPU=(100-Value)
    | stats sparkline(avg(CPU),1m) as CPU_trend by host
    | sort 0 host ]
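One caveat: appendcols pairs rows by position, and here the main search keeps only 20 hosts while the subsearch returns a sparkline for every host, so the sparkline column can end up next to the wrong host. If that bites you, the latter approach mentioned above may be simpler: run the whole search over "last 4 hours" and restrict only the average and standard deviation to the most recent 15 minutes. A minimal sketch of that idea (the recentCPU helper field and the relative_time() filter are illustrative assumptions, not part of the accepted answer):

index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| eval recentCPU=if(_time>=relative_time(now(),"-15m"), CPU, null())
| stats sparkline(avg(CPU),1m) as CPU_trend avg(recentCPU) as UsageCPU stdev(recentCPU) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort -stdev_over_mean
| head 20
| fields host, CPU_trend, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)" stdev_over_mean as RelStdDev

Here the sparkline is built from all 4 hours of events, while the average and standard deviation only see events from the last 15 minutes, so nothing has to be stitched together from two searches.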

syx093
Communicator

Thank you, good sir; you are a gentleman and a scholar.
