Splunk Search

How to display count of results based on time value in lookup file vs current time

TBH0
Explorer

I have a situation where I'm attempting to display, on a dashboard, a count of the items in a lookup file whose scheduled_time_attribute falls within the current hour (XX:00-XX:59). I've tried inputlookup with a 'where' clause filtering on the lookup's time field, and an eval-style greater-than-or-equal-to comparison against relative_time(now(), "@h"), but I can't figure out how I should be doing this. I've also noticed that a simple greater-than comparison, as shown below, behaves unpredictably, returning times that are numerically below 21:59.

Any help on this would be appreciated. Thank you!

 

| inputlookup lookup-file.csv where scheduled_time_attribute > "21:59"

1 Solution

yuanliu
SplunkTrust
SplunkTrust

If the lookup file contains only HH:MM in scheduled_time_attribute, a simple relative_time() will not do, because now() returns a full epoch timestamp.  The requirement is really to compare the hour of day.  You can take a shortcut, though, like this:

| inputlookup lookup-file.csv
| where scheduled_time_attribute > strftime(now(), "%H:00")

This takes advantage of the fact that SPL compares strings by their ASCII values. (For the above to work, scheduled_time_attribute must be in %H:%M, i.e., zero-padded 24-hour format.)


yuanliu
SplunkTrust
SplunkTrust

Generating commands such as inputlookup cannot use evaluation functions, even though they accept a where clause.  So you must use the where command instead, e.g.,

| inputlookup lookup-file.csv
| where scheduled_time_attribute > relative_time(now(),"@h")

This assumes that scheduled_time_attribute is in epoch time.  If it is a string, you need to convert it according to its format.  For example, if the string looks like "2022-11-30 00:00:00":

| inputlookup lookup-file.csv
| where strptime(scheduled_time_attribute, "%F %H:%M:%S") > relative_time(now(),"@h")
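Outside Splunk, the epoch-based comparison above can be emulated in Python. This is only a sketch: the field value is hypothetical, everything is treated as UTC for simplicity, and Python's strptime needs "%F" spelled out as "%Y-%m-%d".

```python
import calendar
import time

FMT = "%Y-%m-%d %H:%M:%S"  # Python equivalent of Splunk's "%F %H:%M:%S"

def hour_floor(epoch):
    """Emulate Splunk's relative_time(epoch, "@h"): snap down to the top of the hour."""
    return epoch - (epoch % 3600)

def scheduled_after_hour_start(value, now_epoch):
    """Emulate: | where strptime(scheduled_time_attribute, ...) > relative_time(now(), "@h")
    value is a string like "2022-11-30 00:00:00" (hypothetical sample data)."""
    epoch = calendar.timegm(time.strptime(value, FMT))
    return epoch > hour_floor(now_epoch)
```

With a "now" of 2022-11-30 10:27:00, "2022-11-30 10:30:00" passes the filter and "2022-11-30 09:59:00" does not, which matches what the SPL would return.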

The unpredictable behavior of the command you tried also comes down to how scheduled_time_attribute is recorded in the lookup.  If it is in "%H:%M" format, i.e., zero-padded like "05:32", you will get predictable results.
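The unpredictability is easy to see in plain Python, which compares strings the same way SPL does, character by character rather than numerically:

```python
# String comparison is lexicographic, not numeric.
# '8' (0x38) sorts after '2' (0x32), so an unpadded hour jumps past "21:59".
print("8:00" > "21:59")   # True  -> 8:00 wrongly "passes" a > 21:59 filter
print("08:00" > "21:59")  # False -> zero-padded values compare in true time order
```

This is exactly why the original query returned times numerically below 21:59.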


TBH0
Explorer

Thank you for that info - that helps a lot. Given this scenario, is it better practice to store the corresponding epoch time in the lookup file, or is it sufficient to keep HH:MM strings, ensuring there are two digits in the hour field (I'm assuming that having 8:00 instead of 08:00 is what you're referring to in your second point)? The only issue I can see is that a new lookup would need to be uploaded each day to keep the date current for the epoch conversion, which isn't the biggest deal in the world.

Edit: I've modified the times in the lookup, but your second code block still doesn't return any results. Do you have any suggestion for the most efficient way to retrieve the count of results that occur during the current hour?

I've tried this after updating the hours to always be zero-padded HH:MM, to no avail:

 

| inputlookup lookup-file.csv
| where scheduled_time_attribute > relative_time(now(),"@h") AND scheduled_time_attribute < relative_time(now(),"@h+h")

 

 

 


yuanliu
SplunkTrust
SplunkTrust

If the lookup file contains only HH:MM in scheduled_time_attribute, a simple relative_time() will not do, because now() returns a full epoch timestamp.  The requirement is really to compare the hour of day.  You can take a shortcut, though, like this:

| inputlookup lookup-file.csv
| where scheduled_time_attribute > strftime(now(), "%H:00")

This takes advantage of the fact that SPL compares strings by their ASCII values. (For the above to work, scheduled_time_attribute must be in %H:%M, i.e., zero-padded 24-hour format.)
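Because the comparison is purely lexicographic, the filter can be emulated with plain string comparison in Python. The row values and the "19:27 current time" below are hypothetical sample data:

```python
# Emulating: | where scheduled_time_attribute > strftime(now(), "%H:00")
times = ["05:31", "08:25", "19:10", "21:23", "22:45"]  # hypothetical lookup rows
boundary = "19:00"  # what strftime(now(), "%H:00") would return at 19:27

# Zero-padded "%H:%M" strings sort in the same order as the times they denote.
matched = [t for t in times if t > boundary]
print(matched)  # ['19:10', '21:23', '22:45']
```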


TBH0
Explorer

I believe I've nearly got this figured out. These two work separately, but not when paired with AND or placed on separate lines. The first finds all rows at or after the top of the current hour (>= XX:00), and the second finds all rows before the top of the next hour (< (XX+1):00).

Top of hour:

 

| inputlookup lookup-file.csv
| where scheduled_time_attribute >= strftime(relative_time(now(), "@h"), "%H:00")

 

 

Top of next hour:

 

| inputlookup lookup-file.csv
| where scheduled_time_attribute < strftime(relative_time(now(), "+h@h"), "%H:00")

 

 

Any idea why these will not work when paired? Together they should show everything from XX:00-XX:59, I believe, since they work fine separately.


yuanliu
SplunkTrust
SplunkTrust

To confirm, does "work when paired" mean scheduled_time_attribute falls between the top of the current hour and the top of the next hour?  At a glance, AND should work.  The only thing I'd get rid of is relative_time(), because it is not serving a useful purpose: strftime(now(), "%H:00") already returns the beginning (0th minute) of the current hour.  I would interpret "top of the hour" as the 0th minute, but if you really want :59 of the previous hour, you can get it by subtraction; there is no need for another function call.

 

| makeresults
| eval scheduled_time_attribute = mvappend("05:31", "08:25", "19:10", "21:23", "22:45")
| mvexpand scheduled_time_attribute
``` above emulates inputlookup ```
| where scheduled_time_attribute > strftime(now() - 3600, "%H:59") AND scheduled_time_attribute < strftime(now(), "%H:59")

 

Using the emulated data, I get:

_time                  scheduled_time_attribute
2022-08-22 19:27:35    19:10
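The same window logic can be checked outside Splunk. This Python sketch uses the boundary strings that strftime(now() - 3600, "%H:59") and strftime(now(), "%H:59") would produce at 19:27; the row values are the emulated data from the makeresults example:

```python
# Emulating: | where scheduled_time_attribute > strftime(now() - 3600, "%H:59")
#                AND scheduled_time_attribute < strftime(now(), "%H:59")
times = ["05:31", "08:25", "19:10", "21:23", "22:45"]
lower = "18:59"  # strftime(now() - 3600, "%H:59") at 19:27
upper = "19:59"  # strftime(now(), "%H:59") at 19:27

matched = [t for t in times if lower < t < upper]
print(matched)  # ['19:10']
```

Only 19:10 survives, matching the result table above.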

TBH0
Explorer

I've just tried it again and now it suddenly works. Thank you yuanliu! Not sure why it wasn't working yesterday, but all seems well today, so I appreciate your help.

 

In case anyone stumbles upon this in the future, I wanted to include another way to accomplish this as well:

 

| inputlookup lookup-file.csv
``` split the "HH:MM" string into hour and minute parts ```
| eval fields=split(scheduled_time_attribute,":")
| eval DHour=mvindex(fields,0)
``` current hour as a zero-padded string, e.g. "19" ```
| eval curHour=strftime(now(),"%H")
| where curHour=DHour
| stats count
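For anyone comparing approaches, the hour-equality idea above boils down to this Python sketch (row values and the current hour are hypothetical; note the lookup's hours must stay zero-padded to match strftime's "%H"):

```python
# Emulating the split/mvindex approach: count rows whose hour equals the current hour.
times = ["05:31", "08:25", "19:10", "19:45", "21:23"]  # hypothetical lookup rows
cur_hour = "19"  # what strftime(now(), "%H") would return during the 19:00 hour

count = sum(1 for t in times if t.split(":")[0] == cur_hour)
print(count)  # 2
```

An unpadded value like "8:25" would split to "8" and never equal the zero-padded "08", which is the same padding caveat as the string-comparison approach.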

 

