
Using loadjob to provide earliest and latest values for a search

xtraMedium
New Member

I have a scheduled report that returns the busiest hour over the last 7 days. I then manually run a report on the response times during that peak hour.
In an effort to automate my task, I tried combining the searches with a subsearch, but the dataset is too large and I get the wrong busiest hour. I'd like to use loadjob on the original report to supply the earliest and latest values to the second scheduled report.
the first report:
index=a source=b
| bin span=1h _time
| convert timeformat="%Y-%m-%d:%H" ctime(_time) AS Date
| stats count as ops values(_time) as time by Date
| sort -ops
| stats first(time) as highhour
| eval earliest=highhour
| eval latest=relative_time(highhour,"+3599s")
| fields earliest,latest
| format "(" "(" "" ")" "" ")"
returns:
earliest latest search
( ( earliest="1531321200" latest="1531324799.000000" ) )

I've tried:
| loadjob "jobname"
| join search
    [search index=a source=b | stats avg(responsetimes)]

Any suggestions on how to get the two searches to work together?


woodcock
Esteemed Legend

Doing earliest and latest in a subsearch is tricky and requires special handling, including using only integer values. Try this for your subsearch; it will work.

[search index=a source=b
| bin span=1h _time
| convert timeformat="%Y-%m-%d:%H" ctime(_time) AS Date
| stats count as ops values(_time) as time by Date
| sort -ops
| stats first(time) as highhour
| eval earliest=highhour
| eval latest=relative_time(highhour,"+3599s")
| fields earliest,latest
| format "" "" "" "" "" ""
| rex field=search mode=sed "s/\"//g"]
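For context, the subsearch sits directly after the base terms of the outer search, so the combined search would look roughly like this (a sketch; the avg(responsetimes) stats is taken from your attempted join):

index=a source=b
    [search index=a source=b
    | bin span=1h _time
    | convert timeformat="%Y-%m-%d:%H" ctime(_time) AS Date
    | stats count as ops values(_time) as time by Date
    | sort -ops
    | stats first(time) as highhour
    | eval earliest=highhour
    | eval latest=relative_time(highhour,"+3599s")
    | fields earliest,latest
    | format "" "" "" "" "" ""
    | rex field=search mode=sed "s/\"//g"]
| stats avg(responsetimes)

The subsearch's earliest and latest fields expand into time modifiers on the outer search, so no join is needed.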

xtraMedium
New Member

How do I join this subsearch with my stats search to use the earliest and latest fields?


xtraMedium
New Member

I found my typo. As a subsearch, it still doesn't bring back the correct hour; I'm investigating why, since the job inspector is not indicating an error.


woodcock
Esteemed Legend

You should UpVote any helpful answers and click Accept on the one that got you all the way to something that works.


DalJeanis
Legend

Step 1) Run this manually ...

index=a  source=b  earliest=-1d@d latest=@d
| bin span=1h _time
| stats count as ops by _time 
| sort - ops
| head 1 
| table _time 
| rename _time as earliest
| eval latest=earliest+3600

Step 2) Validate that you get exactly one result, with an epoch time number in each of earliest and latest.

Step 3) Run the above as a saved search under any available user and app.

Step 4) Using the username and app under which the saved search is saved and run, try this...

index=a source=b
    [| loadjob "username.appname.searchname"]
| stats avg(responsetimes)
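If the loaded job carries extra fields, a slightly more explicit variant pins down which fields become time modifiers by using return (a sketch, with the same assumed saved search name):

index=a source=b
    [| loadjob "username.appname.searchname"
    | return earliest latest]
| stats avg(responsetimes)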

Another approach...

Now try running this...

index=a  source=b  earliest=-1d@d latest=@d
| bin span=1h _time
| stats count as ops avg(responsetimes) as avgresp by _time 
| sort - ops
| head 1 
| table _time avgresp

Losing an entire second from your data is not advised. Use 3600, and then if you think large numbers of items come in on exactly the final second, you can use addinfo and throw away any that exactly match your latest (info_max_time).
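A minimal sketch of that boundary guard, assuming the loadjob variant from step 4 (the where clause drops only events that land exactly on the closing second):

index=a source=b
    [| loadjob "username.appname.searchname"]
| addinfo
| where _time != info_max_time
| stats avg(responsetimes)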


xtraMedium
New Member

I've not been able to get the first approach to work; however, the second approach almost works, and it's probably a better approach anyway. What I omitted from my criteria, since I was expecting a solution that returned earliest and latest, is that I have to separate the response times by type, which involves some mvindex calls.

I'll play around with the second approach you provided and see if I can make it fit. And I'll heed your advice to use 3600.
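Roughly what I'm aiming for, as a sketch (webresp and dbresp are made-up names, assuming responsetimes is a multivalue field whose positions correspond to the types):

index=a source=b earliest=-1d@d latest=@d
| bin span=1h _time
| eval webresp=mvindex(responsetimes, 0)
| eval dbresp=mvindex(responsetimes, 1)
| stats count as ops avg(webresp) as avgweb avg(dbresp) as avgdb by _time
| sort - ops
| head 1
| table _time avgweb avgdb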
