Splunk Search

Search timerange within custom time

skuller
Engager

I am trying to create an alert that checks for spikes in a record that is written once a minute with a count of created objects. This is the query I am currently using, and it works fine to get what I want.

index=metrics sourcetype=created_regcart earliest=-1m latest=now
| rename created_regcarts as nowRegCarts
| join type=outer sourcetype
    [search index=metrics sourcetype=created_regcart earliest=-2m latest=-1m
    | rename created_regcarts as thenRegCarts]
| eval percent=(((nowRegCarts-thenRegCarts)/thenRegCarts)*100)

The issue I am facing is that I am using the Splunk API to check for fired alerts and create a link so people can see the results. I don't want to use the job SID to show the results, so that I can let the jobs expire after a reasonable time and still be able to view the results.

I am currently adding the time in epoch seconds to the end of the query string when I create the link, so it looks like this:

[Splunk Link]/flashtimeline?q=[Query]&earliest=1407938940&latest=1407939240
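
For reference, a minimal Python sketch of how such a link could be assembled (the host and query string are placeholders standing in for [Splunk Link] and [Query], and quote_plus is just one way to URL-encode the search):

from urllib.parse import quote_plus

# Placeholders standing in for [Splunk Link] and [Query] above.
splunk_link = "https://splunk.example.com"
query = "index=metrics sourcetype=created_regcart | ..."

# Epoch seconds for the window the alert fired on (the values from the example link).
earliest = 1407938940
latest = 1407939240

link = "%s/flashtimeline?q=%s&earliest=%d&latest=%d" % (
    splunk_link, quote_plus(query), earliest, latest)
print(link)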

The times I am using inside the query override the earliest and latest times in the link, and I am wondering if there is any way to look at the last 2 minutes of a search based on the custom time range added to the link.

Thanks for all the help.

1 Solution

martin_mueller
SplunkTrust

I assume your root cause is the use of two explicit time ranges in the search? If so, you can rewrite the entire search like this:

  index=metrics sourcetype=created_regcart
| addinfo
| eval then_or_now = if(_time < (info_max_time+info_min_time)/2, "then", "now")
| eval {then_or_now}RegCarts = created_regcarts
| stats avg(*RegCarts) as *RegCarts
| eval percent = (((nowRegCarts-thenRegCarts)/thenRegCarts)*100)

That'll calculate the middle of the time range used and categorize events as "then" if they happen in the first half of the time range and "now" if they happen in the second half. An average is calculated over all "then" and "now" events separately; usually you'll have just one event feeding each average, so you get the actual value. The percentage is calculated as before.
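
To make the split concrete, here is the same comparison in plain Python, using the epoch values from the example link above (this only mirrors the if() in the eval; it is not what Splunk does internally):

# Window boundaries in epoch seconds, i.e. what addinfo exposes
# as info_min_time and info_max_time for this search.
info_min_time = 1407938940
info_max_time = 1407939240

midpoint = (info_max_time + info_min_time) / 2  # 1407939090.0

def then_or_now(event_time):
    # First half of the window -> "then", second half -> "now".
    return "then" if event_time < midpoint else "now"

print(then_or_now(1407938970))  # "then" (early in the window)
print(then_or_now(1407939180))  # "now"  (late in the window)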

You can have this run as an alert over -3m@m to -m@m (allowing a minute's delay for slowly incoming data) or over any absolute time range when viewed later; the search itself won't care.
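
That also means the alert link from the question can point this same search at an absolute window, something like

[Splunk Link]/flashtimeline?q=[Query]&earliest=1407938940&latest=1407939240

with [Query] now being the rewritten search; the then/now split is derived from whatever earliest and latest the link supplies rather than from time ranges hard-coded inside the query.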


skuller
Engager

Thanks so much!
