Splunk Search

Using date ranges from lookup file as "spans" for charting results


I'm looking to create a chart that shows the pass/fail rate of an export process by code release dates rather than by fixed time spans. Each release is about a month long, but the dates may vary by a few days, and a release typically spans parts of two months (e.g. Oct. 14th to Nov. 19th).

My initial thought is to put the release dates into a .csv file with the following fields: release, priorreleasedate, implementationdate.

I have imported a similar .csv table with the lookup definition name "release_dates".
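
For illustration, such a lookup might contain rows like this (the dates here are made up):

release,priorreleasedate,implementationdate
1,09/15/2019,10/14/2019
2,10/14/2019,11/19/2019
3,11/19/2019,12/17/2019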

Then I would like to count the number of passes/fails that occurred during each release. The end result would be a column chart with release number on the X-axis and count on the Y-axis. My initial search to get the events (or transactions) for pass/fail counts is as follows:

index="myindex" sourcetype="mysourcetype" | transaction key startswith="Export Job #" endswith=("Exported" OR "Error") | eval result=if(eventtype="export_successful","Pass","Fail") 

...But I am stuck as to how I would compare the _time of these events with the date ranges in the release_dates lookup table.

Any direction would be much appreciated!



I'm assuming you're trying to find the release number for any given log event, and therefore work out which release it applies to. Here's an example that shows how you can do it, assuming you have a CSV file called dates.csv containing the release date information from your example.

What this does is simulate a bunch of dates in your range and give them random pass/fail attributes.

The append/eventstats/mvzip/mvexpand/rex technique will then create a row per date, but it's basically the last three lines that will give you the comparison technique to do the range checking. 

| makeresults 
| eval _time=strptime("2020-03-14", "%F")
| eval n=mvrange(1,61)
| mvexpand n
| eval _time=_time-(n*86400)
| bin _time span=1m
| stats count by _time
| append [| inputlookup dates.csv ]
| eventstats values(release) as release values(priorreleasedate) as priorreleasedate values(implementationdate) as implementationdate
| eval result=mvindex(mvappend("Pass","Fail"),random() % 2)
| eval d=mvzip(release, mvzip(priorreleasedate, implementationdate, ":"), ":")
| fields _time count d result
| mvexpand d
| rex field=d "(?<release>[^:]*):(?<priorreleasedate>[^:]*):(?<implementationdate>.*)"
| table _time release priorreleasedate implementationdate result
| eval from=strptime(priorreleasedate,"%m/%d/%Y"), to=strptime(implementationdate,"%m/%d/%Y") + 86399
| where _time>=from AND _time<=to
| stats count(eval(result="Fail")) as Fail count(eval(result="Pass")) as Pass by release
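
Applied to your real events rather than the simulated ones, the same pattern would look roughly like this. This is a sketch only — it assumes your release_dates lookup has release, priorreleasedate and implementationdate fields with dates in %m/%d/%Y format, as above:

index="myindex" sourcetype="mysourcetype"
| eval result=if(eventtype="export_successful","Pass","Fail")
| append [| inputlookup release_dates ]
| eventstats values(release) as release values(priorreleasedate) as priorreleasedate values(implementationdate) as implementationdate
| eval d=mvzip(release, mvzip(priorreleasedate, implementationdate, ":"), ":")
| mvexpand d
| rex field=d "(?<release>[^:]*):(?<priorreleasedate>[^:]*):(?<implementationdate>.*)"
| eval from=strptime(priorreleasedate,"%m/%d/%Y"), to=strptime(implementationdate,"%m/%d/%Y") + 86399
| where _time>=from AND _time<=to
| stats count(eval(result="Fail")) as Fail count(eval(result="Pass")) as Pass by release

The final stats output charts directly as a column chart with release on the X-axis.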

On your use of 'transaction', you don't seem to be using any kind of id to group common records, and in any case I would avoid transaction where possible, as it often has unintended side effects. Using stats will almost always give you the same result without the potential headaches.
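
For example, if each export job's events share an id field — call it job_id here, which is an assumption about your data — a stats equivalent of the transaction might look like:

index="myindex" sourcetype="mysourcetype"
| stats min(_time) as _time values(eventtype) as eventtype by job_id
| eval result=if(eventtype="export_successful","Pass","Fail")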

Hope this gives you some pointers.

