Search timerange within custom time

skuller
Engager

I am trying to create an alert that checks for spikes in a record that is written once a minute with a count of created objects. This is the query I am currently using, and it works fine for getting what I want.

index=metrics sourcetype=created_regcart earliest=-1m latest=now | rename created_regcarts as nowRegCarts | join type=outer sourcetype [search index=metrics sourcetype=created_regcart earliest=-2m latest=-1m | rename created_regcarts as thenRegCarts] | eval percent=(((nowRegCarts-thenRegCarts)/thenRegCarts)*100)

The issue I am facing is that I am using the Splunk API to check for fired alerts and to create a link so people can see the results. I don't want to use the job SID to show the results, so that the jobs can expire after a reasonable time and the results can still be viewed.

I am currently appending the time in epoch seconds to the query when I create the link, so it looks like this:

[Splunk Link]/flashtimeline?q=[Query]&earliest=1407938940&latest=1407939240
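For illustration, here is a minimal Python sketch of how such a link might be assembled from the alert's trigger time; the base URL, query string, function name, and window length are assumptions, not details from the original setup.

# Minimal sketch (hypothetical base URL, query, and window length) of
# building a results link whose time range is passed as epoch seconds.
from urllib.parse import urlencode

SPLUNK_BASE = "https://splunk.example.com"  # hypothetical Splunk web host
QUERY = "search index=metrics sourcetype=created_regcart | ..."  # alert search (truncated)

def build_results_link(trigger_time, window_secs=300):
    # Cover `window_secs` of data ending at the alert's trigger time.
    params = {
        "q": QUERY,
        "earliest": trigger_time - window_secs,  # epoch seconds
        "latest": trigger_time,
    }
    return SPLUNK_BASE + "/flashtimeline?" + urlencode(params)

# Reproduces the shape of the link shown above.
print(build_results_link(1407939240))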

The times used inside the query override the earliest and latest times in the link, and I am wondering if there is any way to look at the last two minutes of the search based on the custom time range added to the link.

Thanks for all the help.

1 Solution

martin_mueller
SplunkTrust

I assume your root cause is the use of two explicit time ranges in the search? If so, you can rewrite the entire search like this:

  index=metrics sourcetype=created_regcart
| addinfo
| eval then_or_now = if(_time < (info_max_time+info_min_time)/2, "then", "now")
| eval {then_or_now}RegCarts = created_regcarts
| stats avg(*RegCarts) as *RegCarts
| eval percent = (((nowRegCarts-thenRegCarts)/thenRegCarts)*100)

That'll calculate the middle of the time range used and categorize events as "then" if they fall in the first half of the time range and "now" if they fall in the second half. An average is calculated over all "then" and "now" events separately; usually you'll just have one event feeding each average, so you get the actual value. The percentage is calculated as before.
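To make the midpoint split concrete, here is a plain-Python sketch of the same bucketing logic (not Splunk internals); the event values are hypothetical, and the window boundaries reuse the epoch times from the question.

# Plain-Python illustration of the addinfo/midpoint logic above.
# The two event records are made-up sample data.
info_min_time = 1407938940                    # earliest of the search window
info_max_time = 1407939240                    # latest of the search window
midpoint = (info_max_time + info_min_time) / 2

events = [
    {"_time": 1407938990, "created_regcarts": 120},  # first half  -> "then"
    {"_time": 1407939200, "created_regcarts": 150},  # second half -> "now"
]

buckets = {"then": [], "now": []}
for e in events:
    label = "then" if e["_time"] < midpoint else "now"
    buckets[label].append(e["created_regcarts"])

thenRegCarts = sum(buckets["then"]) / len(buckets["then"])
nowRegCarts = sum(buckets["now"]) / len(buckets["now"])
percent = (nowRegCarts - thenRegCarts) / thenRegCarts * 100
print(percent)  # 25.0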

You can have this run as an alert over -3m@m to -1m@m (allowing a minute's delay for slowly incoming data) or over any absolute time range when viewed later; the search itself won't care.
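For instance, reusing the epoch values from the question above (an assumption about how the link would be built), the drilldown link only needs to carry the rewritten search plus an absolute window, and the then/now split will follow whatever range the link supplies:

[Splunk Link]/flashtimeline?q=[Rewritten Query]&earliest=1407938940&latest=1407939240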

skuller
Engager

Thanks so much!
