Splunk Search

How to get a sparkline to run on a different time span?

syx093
Communicator

I am wondering if it is possible to have a sparkline run on a different time span than the 15 minutes I have set for this search. I am trying to build a report of the top 20 hosts with the greatest relative standard deviation over a 15-minute (or shorter) time span, but I want the sparkline to be based on events from the last 4 hours, because over only 15 minutes the sparkline does not look "pretty". Alternatively, can the search take the average and standard deviation from just the first 15 minutes while the main search still covers 4 hours?

index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| stats sparkline(avg(CPU),1m) as CPU_trend avg(CPU) as UsageCPU stdev(CPU) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort - stdev_over_mean
| head 20
| fields host, CPU_trend, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)" stdev_over_mean as RelStdDev
0 Karma
1 Solution

woodcock
Esteemed Legend

The only ways to work with multiple timeframes are to use a subsearch, or to have the main/outer search cover the broadest time range and carve out the different time slices inside the search. This example is of the former sort and uses appendcols, assuming the main/outer search runs over "last 15 minutes":

index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| stats avg(CPU) as UsageCPU stdev(CPU) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort - stdev_over_mean
| head 20
| fields host, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)" stdev_over_mean as RelStdDev
| sort 0 host
| appendcols [search earliest=-4h index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
    | eval CPU=(100-Value)
    | stats sparkline(avg(CPU),1m) as CPU_trend by host
    | sort 0 host]
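The latter approach (one broad search with the 15-minute slice computed inline) could look something like the sketch below. The `recentCPU` field name and the `relative_time(now(), "-15m")` cutoff are illustrative assumptions, not part of the accepted answer; the idea is that `avg()` and `stdev()` ignore null values, so the stats are computed only over the last 15 minutes while the sparkline spans the full 4 hours:

```
earliest=-4h index=perfmon sourcetype="Perfmon:CPU" instance=_Total counter="% Idle Time"
| eval CPU=(100-Value)
| eval recentCPU=if(_time >= relative_time(now(), "-15m"), CPU, null())
| stats sparkline(avg(CPU),1m) as CPU_trend avg(recentCPU) as UsageCPU stdev(recentCPU) as StDev by host
| eval stdev_over_mean=round(StDev/UsageCPU,10)
| eval UsageCPU=round(UsageCPU,3)
| sort - stdev_over_mean
| head 20
| fields host, CPU_trend, UsageCPU, stdev_over_mean
| rename UsageCPU as "UsageCPU(%)" stdev_over_mean as RelStdDev
```

This avoids the subsearch entirely, at the cost of always scanning 4 hours of events even though the ranking only uses the most recent 15 minutes.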

syx093
Communicator

Thank you, good sir, you are a gentleman and a scholar.

0 Karma