Dashboards & Visualizations

How to extract the duration of timespan in a dashboard?

hettervik
Builder

Hi,

I have a dashboard where I'm running a timechart search with tstats. I want to increase the default number of bins for the tstats command, so that if I search for the last 7 days, I get more than just 7 bins. I have a solution that almost works, shown below.

| tstats local=f prestats=f count WHERE index=main BY _indextime, _time [search index=main | head 1 | eval span=tostring(ceil((now()-relative_time(now(), "$time_token.earliest$"))/500))."s" | return span] | eval diff=_indextime-_time | fields - count | timechart bins=500 cont=t avg(diff) AS "Average", median(diff) AS "Median" | eval Threshold=3600

The sub-search here is the key part. It computes the duration of the time range set with the time picker and divides it by 500. For example, if I search for the last 7 days, the returned span for tstats will be 1331s. I'll copy it in below.

[search index=main | head 1 | eval span=tostring(ceil((now()-relative_time(now(), "$time_token.earliest$"))/500))."s" | return span]
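To make the arithmetic concrete, here is a minimal standalone sketch, with the token replaced by a hardcoded "-24h" earliest time (an assumption for illustration): a 24-hour range is 86400 seconds, 86400/500 = 172.8, and ceil rounds that up to a span of "173s".

 | makeresults | eval span=tostring(ceil((now()-relative_time(now(), "-24h"))/500))."s"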

Two problems:

  1. I rely on the time picker being set to something like "last x days/hours". If someone instead picks a single day three weeks ago, it won't work: the span sub-search will return (three weeks in seconds)/500 when it should return (one day in seconds)/500. In the sub-search I can't change "now()" to "$time_token.latest$" either, because for a range like "last 7 days", $time_token.latest$ returns the string "now", which the relative_time function doesn't accept.
  2. Is there a prettier way of returning the span than the whole "search index=main | head 1" construction? Starting the sub-search with just "| eval" doesn't seem to work.

Any help is greatly appreciated, thanks!

1 Solution

somesoni2
Revered Legend

Assuming the timerange picker is applied to your main search (the one with tstats) as well, you can replace your span sub-search like this:

| tstats local=f prestats=f count WHERE index=main BY _indextime, _time [| gentimes start=-1 | addinfo | eval span=tostring(ceil((info_max_time-info_min_time)/500))."s" | return span] | eval diff=_indextime-_time | fields - count | timechart bins=500 cont=t avg(diff) AS "Average", median(diff) AS "Median" | eval Threshold=3600

The key here is the addinfo command, which adds the current search time range to each result as the fields info_min_time (earliest) and info_max_time (latest).
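If you want to check what addinfo returns for a given time picker selection, a quick standalone sketch (not a sub-search) is:

 | makeresults | addinfo | eval duration=info_max_time-info_min_time | table info_min_time, info_max_time, duration

Unlike the now()-based version, the duration here follows the actual search range, so a single day three weeks ago gives roughly 86400 seconds rather than three weeks' worth.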


hettervik
Builder

Works perfectly, thank you!

Though, someone recommended using makeresults instead of gentimes. Is there any practical difference?
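For reference, a makeresults version of the sub-search would look like this (a sketch with the same logic as above):

 [| makeresults | addinfo | eval span=tostring(ceil((info_max_time-info_min_time)/500))."s" | return span]

Both commands are only being used to generate a single row for addinfo to annotate, so I'd expect the result to be the same.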


woodcock
Esteemed Legend

Exactly right.


sbbadri
Motivator

This might help:

eval latest_time = $time_token.latest$ | eval span=tostring(ceil((latest_time-relative_time(now(), "$time_token.earliest$"))/500))."s"
