Splunk Search

How to get a sparkline to run on a different time span?

syx093
Communicator

I am wondering if it is possible to have a sparkline run on a different time span than the 15 minutes I have set for this search. What I am trying to do is create a report that finds the top 20 hosts with the greatest relative standard deviation over a 15-minute (or shorter) timespan. However, we want the sparkline to be based on events from the last 4 hours. The reason we want this is that over only 15 minutes the sparkline does not look "pretty". Alternatively, can we take the average and standard deviation of the events from the first 15 minutes while the main search still covers 4 hours?

index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| stats sparkline(avg(CPU),1m) as CPU_trend avg(CPU) as UsageCPU stdev(CPU) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort - stdev_over_mean
| head 20
| fields host, CPU_trend, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)", stdev_over_mean as RelStdDev

woodcock
Esteemed Legend

The only ways to work with multiple timeframes are to use a subsearch, or to have the main/outer search span the broadest time range and handle the different slices inside the search. This example is of the former sort and uses appendcols, assuming the main/outer search is run over "Last 15 minutes":

index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| stats avg(CPU) as UsageCPU stdev(CPU) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort - stdev_over_mean
| head 20
| fields host, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)", stdev_over_mean as RelStdDev
| sort 0 host
| appendcols
    [ search earliest=-4h index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
    | eval CPU=(100-Value)
    | stats sparkline(avg(CPU),1m) as CPU_trend by host
    | sort 0 host ]
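As a minimal sketch of the latter approach (one broad search with the time slices handled inside it), the single 4-hour search below computes the sparkline over all events but restricts the average and standard deviation to the last 15 minutes via a conditional eval. Because appendcols matches rows by position rather than by host, a single search also sidesteps any row-alignment concerns. The field name `recent` is just an illustrative choice:

```
index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time" earliest=-4h
| eval CPU=(100-Value)
| eval recent=if(_time >= relative_time(now(), "-15m"), CPU, null())
| stats sparkline(avg(CPU),1m) as CPU_trend avg(recent) as UsageCPU stdev(recent) as StDev by host
| eval RelStdDev=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort - RelStdDev
| head 20
| fields host, CPU_trend, UsageCPU, RelStdDev
```

Since stats functions ignore null values, avg(recent) and stdev(recent) only see the last 15 minutes of data, while sparkline(avg(CPU),1m) still trends the full 4 hours.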


syx093
Communicator

Thank you, good sir; you are a gentleman and a scholar.
