Splunk Search

How to get a sparkline to run on a different time span?

syx093
Communicator

Is it possible to have a sparkline run over a different time span than the 15 minutes I have set for this search? I am trying to create a report that finds the top 20 hosts with the greatest relative standard deviation over a 15-minute (or shorter) window, but I want the sparkline to be based on events from the last 4 hours, because over only 15 minutes the sparkline does not look "pretty". Alternatively, can we take the average and standard deviation of the events in the first 15 minutes while the main search still covers 4 hours?

index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| stats sparkline(avg(CPU),1m) as CPU_trend avg(CPU) as UsageCPU stdev(CPU) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort -stdev_over_mean
| head 20
| fields host, CPU_trend, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)" stdev_over_mean as RelStdDev
1 Solution

woodcock
Esteemed Legend

The only ways to work with multiple timeframes are to use a subsearch, or to have the main/outer search cover the broadest time range and handle the different slices inside the search. This example is of the former sort and uses appendcols, assuming the main/outer search is using "Last 15 minutes":

index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| stats avg(CPU) as UsageCPU stdev(CPU) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort -stdev_over_mean
| head 20
| fields host, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)" stdev_over_mean as RelStdDev
| sort 0 host
| appendcols
    [ earliest=-4h index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
    | eval CPU=(100-Value)
    | stats sparkline(avg(CPU),1m) as CPU_trend by host
    | sort 0 host ]
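For completeness, here is a hedged sketch of the latter approach (one outer search over the broadest range, with the 15-minute slice handled inside the search). The `recent` field name and the `relative_time()` cutoff are illustrative assumptions, not part of the original answer:

```
earliest=-4h index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| eval recent=if(_time >= relative_time(now(), "-15m"), CPU, null())
| stats sparkline(avg(CPU),1m) as CPU_trend avg(recent) as UsageCPU stdev(recent) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort -stdev_over_mean
| head 20
| fields host, CPU_trend, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)" stdev_over_mean as RelStdDev
```

Since `stats` ignores null values, `avg(recent)` and `stdev(recent)` are computed only from the last 15 minutes while the sparkline spans the full 4 hours. This runs in a single pass and avoids the row-alignment assumption of appendcols, which pairs rows purely by position and so requires both sides to be sorted identically and to contain the same set of hosts.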


syx093
Communicator

Thank you, good sir; you are a gentleman and a scholar.
