Splunk Search

New field defined by time ranges

hollybross1219
Path Finder

I'm trying to create the search below with the following dimensions, but I'm struggling to create the 'timephase' column.

The 'timephase' field would use the same logic as the date range picker in the global search, but only return the data applicable to that timephase (i.e., the "1 day" row would reflect the values of the subsequent columns for the last day, etc.). I tried to approach it with an eval case(), but ran into a mutual-exclusion problem: the data captured in "1 day" was excluded from "1 week", even though it should be counted there too.
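For reference, the mutual exclusion comes from case() returning only the first clause that matches, so each event can land in just one bucket. A run-anywhere sketch of that behavior with made-up events, not the actual search:

| makeresults count=5 
| streamstats count as n 
| eval _time=now() - (n * 86400) 
| eval timephase=case(_time>=relative_time(now(), "-1d@d"), "1 day", 
    _time>=relative_time(now(), "-1w@d"), "1 week", 
    _time>=relative_time(now(), "-1mon@d"), "1 month", 
    1=1, "YTD")
| stats count by timephase

The event from one day ago is counted only under "1 day" and never reaches the "1 week" clause.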

Does anyone have any recommendation for approaches to this?

[Mockup table omitted: rows for timephase values 1 day, 1 week, 1 month, and YTD alongside the other desired columns]


dmarling
Builder

If you only need those four groupings, you can create them with a series of evals before your stats. Here's a run-anywhere example that demonstrates the method:

| makeresults count=1000 
| eval random=random() % 5616000 
| eval _time=_time-random 
| sort 0 + _time 
| addinfo 
| eval timephase1=if(_time>=relative_time(info_max_time, "-1d@d"), "1 day", null()), 
    timephase2=if(_time>=relative_time(info_max_time, "-1w@d"), "1 week", null()), 
    timephase3=if(_time>=relative_time(info_max_time, "-1mon@d"), "1 month", null()), 
    timephase4=if(_time>=relative_time(info_max_time, "@y"), "YTD", null()), 
    timephase=mvappend(timephase1, timephase2, timephase3, timephase4)
| stats count by timephase
| eval sorter=case(timephase="1 day", 1, timephase="1 week", 2, timephase="1 month", 3, timephase="YTD", 4)
| sort sorter
| fields - sorter

The query starts by creating four separate fields, one for each time bucket (assuming you only need the four listed in your example). The timephase field is then built as a multivalue combination of those four fields, since a single event can fall into multiple buckets. Finally, the query produces a table showing the count of events in each bucket. You'll see that YTD always equals 1,000 because the query only creates 1,000 events; the other numbers vary based on what the random() function produces. The sorter field ensures the rows appear in the order of your mockup.
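To apply the same pattern to indexed data, swap the makeresults setup for your base search and keep the mvappend logic as-is. The index, sourcetype, and user field below are hypothetical placeholders, and the dc() column is only there to show that additional stats functions can sit alongside the count:

index=my_index sourcetype=my_sourcetype 
| addinfo 
| eval timephase1=if(_time>=relative_time(info_max_time, "-1d@d"), "1 day", null()), 
    timephase2=if(_time>=relative_time(info_max_time, "-1w@d"), "1 week", null()), 
    timephase3=if(_time>=relative_time(info_max_time, "-1mon@d"), "1 month", null()), 
    timephase4=if(_time>=relative_time(info_max_time, "@y"), "YTD", null()), 
    timephase=mvappend(timephase1, timephase2, timephase3, timephase4)
| stats count as events, dc(user) as distinct_users by timephase
| eval sorter=case(timephase="1 day", 1, timephase="1 week", 2, timephase="1 month", 3, timephase="YTD", 4)
| sort sorter
| fields - sorter

Because timephase is multivalue, stats by timephase counts each event once per bucket it falls into, which is exactly why the "1 day" data also shows up under "1 week".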

I used the addinfo command so the time buckets are always computed relative to the latest time in the searched period (info_max_time), in case you run this query over a time range that doesn't end at now().
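In other words, if the time picker is set to, say, last quarter, info_max_time is the end of that searched range rather than the current wall-clock time, so "1 day" means the last day of the searched period. A quick sanity check of what the buckets are anchored to (run it over any bounded time range; for an all-time search info_max_time is not a finite timestamp):

| makeresults 
| addinfo 
| eval search_latest=strftime(info_max_time, "%Y-%m-%d %H:%M:%S"), 
    one_day_cutoff=strftime(relative_time(info_max_time, "-1d@d"), "%Y-%m-%d %H:%M:%S"), 
    one_week_cutoff=strftime(relative_time(info_max_time, "-1w@d"), "%Y-%m-%d %H:%M:%S")
| table search_latest one_day_cutoff one_week_cutoff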

If this comment/answer was helpful, please up vote it. Thank you.


hollybross1219
Path Finder

Really clever, @dmarling, thank you!
