Splunk Search

Using result fields for earliest/latest time in secondary search

lukepatrick
Explorer

I have an existing search that finds fields named "RunDate", "StartTime", and "EndTime" stored as part of test run summaries. The search then converts those time values into usable Unix epoch time via strptime:

 index="IDX1" sourcetype="SRC" ProjectName="PRJ" | eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p") | eval etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p") | table RunDate stime etime | sort RunDate desc
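For reference, a minimal standalone sketch of that strptime conversion (the timestamp value here is made up for illustration):

```
| makeresults
| eval StartTime="03/15/2021 02:30:00 PM"
| eval stime = strptime(StartTime, "%m/%d/%Y %I:%M:%S %p")
| table StartTime stime
```

The `%I`/`%p` pair handles the 12-hour clock with AM/PM; `stime` comes back as a Unix epoch value suitable for `earliest`/`latest`.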

Now is the tricky part...
I would like a 4th column that uses the time frame in each row to perform a calculation on values coming from a different index/source.

 index="IDX2" "HOST" "data.metricId" IN (1234) | stats avg("data.metricValues{}.value") as average | eval total=average/100 

Somehow, this needs to be time-constrained by earliest=stime and latest=etime for each RunDate (the results should form a series).

Is this possible? To run a secondary search/eval, using calculated values from the primary search as the earliest and latest time constraints?

I attempted to do this with a map search, but it seems that for map to work properly there must be an overlapping field. In this case, the only thing the two searches share is the time parameters.

1 Solution

somesoni2
Revered Legend

You'll use map like this

index="IDX1" sourcetype="SRC" ProjectName="PRJ" 
| eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p") | eval etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p") 
| table RunDate stime etime
| map maxsearches=1000 search="search index=\"IDX2\" \"HOST\" \"data.metricId\" IN (1234) earliest=$stime$ latest=$etime$ | stats avg(\"data.metricValues{}.value\") as average | eval total=average/100 | eval RunDate=\"$RunDate$\", stime=$stime$, etime=$etime$ | table RunDate stime etime total"

This will be slow (sometimes really slow). If there is a relationship between RunDate and stime (or etime), then this can be optimized. Do RunDate and stime/etime have any relationship?
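If map proves too slow, one map-free sketch (my own suggestion, not from the thread) is to push the run windows into the IDX2 search as time bounds via a subsearch — fields named `earliest` and `latest` returned by a subsearch act as time modifiers on the outer search:

```
index="IDX2" "HOST" "data.metricId" IN (1234)
    [ search index="IDX1" sourcetype="SRC" ProjectName="PRJ"
      | eval earliest = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p")
      | eval latest = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p")
      | table earliest latest
      | format ]
| stats avg("data.metricValues{}.value") as average
| eval total=average/100
```

Note this only restricts IDX2 events to the union of the run windows and returns a single overall average; splitting the result per RunDate would still require re-attaching each window to its events, which is exactly what map does per row.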



lukepatrick
Explorer

 index="IDX1" AND sourcetype="SRC" AND ProjectName="PRJ" 
 | eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p") | eval etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p") 
 | dedup RunDate 
 | map maxsearches=150 search="search index=\"IDX2\" \"HOST\" \"NODE\" \"data.metricId\" IN (1234) earliest=$stime$ latest=$etime$ | stats avg(\"data.metricValues{}.value\") AS average | eval total=average/100, RunDate=\"$RunDate$\", stime=$stime$, etime=$etime$" 
 | table RunDate total | sort RunDate asc

This works wonderfully directly from Search, but it gets stuck on "Search is waiting for input..." when used as a dashboard panel. Not 100% there, but this was a huge help. Thank you somesoni2
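The "waiting for input" symptom usually means the dashboard is treating $stime$, $etime$, and $RunDate$ as unset dashboard tokens. A hedged suggestion: in Simple XML panels, a literal dollar sign is escaped by doubling it, so the map template variables would become:

```
| map maxsearches=150 search="search index=\"IDX2\" \"HOST\" \"NODE\" \"data.metricId\" IN (1234) earliest=$$stime$$ latest=$$etime$$ | stats avg(\"data.metricValues{}.value\") AS average | eval total=average/100, RunDate=\"$$RunDate$$\", stime=$$stime$$, etime=$$etime$$"
```

The doubled `$$` reaches the search pipeline as a single `$`, so map's own field substitution still works while the dashboard's token substitution is bypassed.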


lukepatrick
Explorer

There is no direct relationship between them; the three values "RunDate", "StartTime", and "EndTime" are all unique fields that come from the opening search. Fortunately, there will only ever be ~6 sets of values returned from the opening search.

I will try this out and share my results. Thank you somesoni2!


lukepatrick
Explorer

Well, the search runs, but it returns results from the full time range and is not being constrained by each row's 'earliest' and 'latest' values.
I'm expecting 5-7 results, but am getting 50,000. I will return to this tomorrow.
