Alerting

Alerting for entries not found in given time period

naveenalagu
Explorer

Hello good folks, 

I have a requirement where, for a given time period, I need to send out an alert if a particular 'value' doesn't come up. The check is to be driven by a lookup table which lists all possible values that can occur in a given time period. The lookup table has the following format:

Time                        Value
Monday 14: [1300 - 1400]    412790 AA
Monday 14: [1300 - 1400]    114556 BN
Monday 15: [1400 - 1500]    243764 TY

Based on this, in the live count, for a given time period (let's take Monday 14: [1300 - 1400] as an example), if I do a stats count as Value by Time and I don't get "114556 BN" as one of the values, an alert is to be generated.

Where I'm stuck is matching the time with the values. If I use inputlookup first, I am not able to pass the time from the master time picker, which means I can't check a specific time frame (in this case an hour). If I run the index search first, I can match the time against the lookup using | join type=left, but then I am not able to find the missing values - those present in the lookup but absent from the live count.

I would appreciate some advice on how to go about this. Thanks in advance!


bowesmana
SplunkTrust

stats count as Value by Time cannot give you a non-numeric value like the ones in your lookup.

But aside from that, if you have a lookup table with time and value, then you can simply look up the time in the lookup to get the values.

In your stats by time, what is "time" in your example - is it a formatted time representation, the raw Splunk _time field, or the _time field with a | bin bucket alignment applied?

If so, what you want is something along these lines:

search bla
| bin _time span=1h
| stats count by _time category
| eval Time=strftime(_time, "%A %d: [%H%M - ").
            strftime(relative_time(_time, "+1h"), "%H%M]")
| eval Value=count." ".category
| lookup lookup.csv Time Value OUTPUT Value as Found
| where isnull(Found)

so that if your Time field in the lookup is Day DayOfMonth [TIME_FROM - TIME_TO] and the Value is the expected count + the category (AA/BN/TY), then the above search just does a lookup and expects to find the value - if it does not, Found will be null.
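As an illustration only, the lookup test can be sketched in plain Python, treating the lookup as a set of expected (Time, Value) pairs (all data below is invented):

```python
# Plain-Python analogue of "| lookup ... OUTPUT Value as Found | where isnull(Found)":
# events whose (Time, Value) pair is absent from the lookup are flagged.
expected = {("Monday 14: [1300 - 1400]", "412790 AA"),
            ("Monday 14: [1300 - 1400]", "114556 BN")}

events = [("Monday 14: [1300 - 1400]", "412790 AA"),
          ("Monday 14: [1300 - 1400]", "777777 XX")]   # XX is not expected

# Found is "null" when the pair has no match in the lookup
unmatched = [e for e in events if e not in expected]
print(unmatched)  # -> [('Monday 14: [1300 - 1400]', '777777 XX')]
```

Note that this direction only surfaces events that are missing from the lookup; lookup entries absent from the data never appear at all, which is the gap discussed further down the thread.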


naveenalagu
Explorer

Apologies, my Value field is a combination of two separate field values from my index; it's there to uniquely classify an event. As you mentioned, the count will not give a non-numeric value. The command would be | stats count by Time Value | fields - count

For using the lookup, should I sort the lookup or the live data from the index before comparing? The reason for asking is that even though I can manually confirm there is a mismatch, the search is unable to locate it. E.g. if the live data has 18 entries and the lookup has 20, the 2 missing entries are not showing up in the results.


bowesmana
SplunkTrust

Ah, so I missed your point somewhat in that the list contains all the values you want and you should alert if one is missing from the data.

You can do this

| stats count by Time Value
| append [
  | inputlookup lookup.csv
  ``` Filter the entries you expect here, e.g. using addinfo ```
  ``` | where Time is in the range you want ```
]
| stats count by Time Value
| where count=1

which is adding the rows from the lookup to the end of your found data and then doing the stats count again.

If count=1 then it has only come from the lookup.

The filtering (where...) will need to work out what time range your search covers. Use the addinfo command to get info_min_time and info_max_time fields which you can then use to filter those values from the lookup you want.
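The append-then-count trick can be illustrated with a small Python sketch (sample data invented; it assumes the live rows are already deduplicated, which the first stats count by guarantees):

```python
# Rows seen in the live data are combined with the full expected list from
# the lookup; any (Time, Value) pair appearing only once must have come
# from the lookup alone, i.e. it is missing from the live data.
from collections import Counter

live = [("Monday 14: [1300 - 1400]", "412790 AA")]          # from the index
lookup = [("Monday 14: [1300 - 1400]", "412790 AA"),
          ("Monday 14: [1300 - 1400]", "114556 BN")]        # expected values

counts = Counter(live + lookup)               # | append ... | stats count by
missing = [pair for pair, n in counts.items() if n == 1]    # | where count=1
print(missing)  # -> [('Monday 14: [1300 - 1400]', '114556 BN')]
```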

naveenalagu
Explorer

Thanks for this.

Is it possible to pass the Time value, which is calculated in the real-time search, to the inputlookup, to filter out the required 'Values' for comparison?

I checked the documentation on lookup and tried the solution you shared earlier; however, I am not getting the desired results. E.g. my lookup has 20 entries and the real-time count only has 18, yet the final result is 18 entries, with Found containing values only for those present in the real-time check. The two missing entries are not displayed (there are no rows with null values under Found).

Any suggestions on what might be the possible issue ?


bowesmana
SplunkTrust

As @ITWhisperer has said, subsearches run first, so you can't pass time from the outer search to the subsearch. However, the addinfo command is generally the way to know what time range your search covers, so in the subsearch you can do something like this to remove the entries from the lookup that do NOT fit the time range of the search:

[ 
  | inputlookup lookup.csv
  | addinfo
  | eval first_time=strftime(info_min_time, "%H")
  | eval last_time=strftime(info_max_time, "%H")
  | rex field=Time "[^ ]* (?<hour>\d+)"
  | where hour>=first_time AND hour<=last_time
]

This takes the HOUR part of your lookup Time value, compares it to the search time range, and retains only the lookup entries that match, so when you combine the entries after this subsearch, only the lookup rows relevant to the range are collected with the real-time data.
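As an illustration only, the same hour filtering in plain Python (the field layout and window values are made up):

```python
# Keep only lookup rows whose hour (the second token of the Time field)
# falls inside the search window, mirroring the subsearch above.
import re

lookup_rows = ["Monday 14: [1300 - 1400]",
               "Monday 15: [1400 - 1500]",
               "Monday 18: [1700 - 1800]"]

first_time, last_time = 13, 15        # from info_min_time / info_max_time
kept = [row for row in lookup_rows
        if first_time <= int(re.match(r"\S+ (\d+)", row).group(1)) <= last_time]
print(kept)  # -> ['Monday 14: [1300 - 1400]', 'Monday 15: [1400 - 1500]']
```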

naveenalagu
Explorer

Thanks @bowesmana @ITWhisperer. I was able to concoct a solution based on your inputs. 🙏

@bowesmana In your solution below, count=1 can also indicate that the value was present in the live search but not in the inputlookup. However, your approach of using append works in this case.

| stats count by Time Value
| append [
  | inputlookup lookup.csv
  ``` Filter the entries you expect here, e.g. using addinfo ```
  ``` | where Time is in the range you want ```
]
| stats count by Time Value
| where count=1

@ITWhisperer Your solution of using a flag field worked for me, as it also handled the scenario where a particular value was found only in the live index search but not in the lookup. Thanks for this.

<your index>
| bin _time as period_start span=1h
| dedup period_start Value
| eval flag = 1
| append
  [| inputlookup lookup.csv
  | eval period_start = ``` convert your time period here ```
  | eval flag = 2]
| stats sum(flag) as flag by period_start Value
``` flag = 1 if only in index, 2 if only in lookup, or 3 if in both ```
| where flag = 2
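For illustration, the flag arithmetic in plain Python (invented sample data):

```python
# flag=1 for index rows, flag=2 for lookup rows; summing per
# (period, value) key yields 1 (index only), 2 (lookup only) or 3 (both).
from collections import defaultdict

index_rows  = [("13:00", "412790 AA"), ("13:00", "999999 ZZ")]
lookup_rows = [("13:00", "412790 AA"), ("13:00", "114556 BN")]

flags = defaultdict(int)
for key in index_rows:
    flags[key] += 1        # | eval flag = 1
for key in lookup_rows:
    flags[key] += 2        # | eval flag = 2

missing = sorted(k for k, f in flags.items() if f == 2)   # lookup only
print(missing)  # -> [('13:00', '114556 BN')]
```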

 


bowesmana
SplunkTrust

Yes @naveenalagu, you are right re count=1. In this type of solution you normally set an indicator in each part of the search (outer + append), as @ITWhisperer has shown, and then the final stats does the evaluation to work out where the data came from.

 


ITWhisperer
SplunkTrust
SplunkTrust

Subsearches are executed before main searches so there is no way to pass information from the main search to the subsearch, unless you use the map command, which has its own challenges!


naveenalagu
Explorer

So the lookup here is being treated as a subsearch, and hence only those events which match the main search are being displayed. Is my understanding correct?


ITWhisperer
SplunkTrust

I am not sure which "lookup" you are referring to by "here" - in both the example I gave and the one @bowesmana gave, we are using inputlookup in a subsearch, which retrieves all entries from the lookup store. Since they are appended to the pipeline of events retrieved by the main search, the events from the main search still exist. By using the stats command, as we have shown, you can effectively combine these two sets of events. In my solution, by setting the flag field to different values in the two different sets, it is possible to determine whether the events with a common time and value have come from one, the other, or both sets. This is how you can determine which values (from the lookup) are missing from the main search within each timeframe.


naveenalagu
Explorer

Hello, 

The Time field is created by the following search:

| bin _time span=1h
| eval Time1=strftime(_time+3600,"%A %H") 
| eval eventstart = strftime(_time, "%H") 
| eval eventend=01 
| eval eventrange = eventstart+(eventend) 
| eval eventrange=if(eventrange=-1, 23,eventrange ) 
| replace 0 with 00, 1 with 01, 2 with 02, 4 with 04, 5 with 05, 6 with 06, 7 with 07, 3 with 03, 9 with 09, 8 with 08 
| eval Time2 = Time1.": [".eventstart."00 - ".eventrange."00] " 

So, the Time format actually is Day TIME_TO_HOUR: [TIME_FROM - TIME_TO] - the first number is the hour at which the period ends.
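If that is the format, the construction can be sketched in Python as follows (a hedged sketch, not the SPL itself; UTC is assumed for reproducibility, and the day/hour semantics are taken from the sample lookup rows earlier in the thread):

```python
# Given an event's epoch time, bin it to the hour and build
# "Day EndHour: [HHMM - HHMM]" so it matches the lookup's Time format.
from datetime import datetime, timedelta, timezone

def lookup_time(epoch):
    start = datetime.fromtimestamp(epoch - epoch % 3600, tz=timezone.utc)
    end = start + timedelta(hours=1)
    return f"{end:%A} {end:%H}: [{start:%H}00 - {end:%H}00]"

# 13:37 UTC on a Monday falls in the 1300-1400 bin, labelled with end hour 14
print(lookup_time(datetime(2024, 10, 7, 13, 37, tzinfo=timezone.utc).timestamp()))
# -> Monday 14: [1300 - 1400]
```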


ITWhisperer
SplunkTrust

If your time periods are always 1 hour, you only need the start time and you can bin / bucket _time with span=1h, this gives you a time you can match on as well as your values.

<your index>
| bin _time as period_start span=1h
| dedup period_start Value
| eval flag = 1
| append
  [| inputlookup lookup.csv
  | eval period_start = ``` convert your time period here ```
  | eval flag = 2]
| stats sum(flag) as flag by period_start Value
``` flag = 1 if only in index, 2 if only in lookup, or 3 if in both ```
| where flag = 2

naveenalagu
Explorer

My lookup file already has the Time entries in the expected format. Should I still proceed with the time formatting?


ITWhisperer
SplunkTrust

After the bin command, period_start will be an epoch (Unix) time aligned to the start of the hour. In order to get a match, you should parse / reformat / convert the time from your lookup into a similarly aligned Unix time. Then the stats command can match on both the time and the value.
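The alignment being described can be sketched in Python (illustrative only; timestamps are invented):

```python
# "| bin _time span=1h" floors an epoch time to the start of the hour, so
# the lookup's Time must be converted to the same floored epoch for the
# stats command to match the two sets on period_start.
from datetime import datetime, timezone

def hour_floor(epoch):
    return epoch - epoch % 3600          # what | bin _time span=1h produces

# An event at 13:37 and a lookup period starting at 13:00 land in the same bin
event = datetime(2024, 10, 7, 13, 37, tzinfo=timezone.utc).timestamp()
period = datetime(2024, 10, 7, 13, 0, tzinfo=timezone.utc).timestamp()
print(hour_floor(event) == period)  # -> True
```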
