Splunk Search

Counting non-events

vbarna
Engager

Hi all,

I have data from motion sensors. Each sensor sends data when it detects that someone is near it. I would like to count, per hour, the number of sensors that detected 1 person during that hour, 2 people during that hour, 3 people, and so on. The command I execute is:

[..] bucket _time span=1h | eventstats count as num_detections by sensor,_time | stats count(sensor) by num_detections | sort num_detections

The outcome is a table like:

num_detections    count(sensor)
1                 600
2                 650
3                 800
...

I get what I want except for the number of sensors that have detected 0 persons. This is because I'm counting events, so for the 0 row I need to add something that counts the number of sensors that have not registered any detection (counting the non-events, so to speak). Does anybody have a recommendation for the best way to do this? Should I redo my command entirely with another approach?
One option I was considering is to somehow compute total_num_of_sensors - num_of_sensors_that_have_detected_something. However, how could I include that in the outcome?

Thanks in advance

1 Solution

acharlieh
Influencer

I have a few different thoughts for you:


If you're searching a timespan that covers multiple hours, during which you're fairly certain all sensors have reported at least one detection:

... | timechart span=1h limit=0 count by sensor 
    | untable _time sensor detections 
    | chart limit=0 count over _time by detections

This will give you a chart of the number of sensors that have had a given number of detections in each hour, for all hours in the search timeframe. Assuming each sensor has reported at least once during the search timeframe, any hour in which a sensor did not register a detection will count it under 0. How this works: first we use timechart to get the number of detections for every sensor in each hour. After this command, the columns are sensors, the rows are hours, and the values are the numbers of detections (which will be 0 if a given sensor didn't report in a particular hour; however, to appear as a column at all, the sensor must have shown up at least once during some hour in the search window).

We then use untable to take the tabular data created by timechart, and turn it into stats-like data:

E.g., instead of:

_time  sensor1 sensor2 sensor3 ...
12:00     0       2       1
 1:00     1       2       0
 ...

Our results turn into:

_time sensor  detections
12:00 sensor1     0
12:00 sensor2     2
12:00 sensor3     1
12:00   ...      ...
 1:00 sensor1     1
 1:00 sensor2     2
 1:00 sensor3     0
  ...   ...      ...

From here we're able to use chart to turn this stats-like data back into a chart, but this time based on the number of sensors that had each detection count in each hour.
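
If you want to try this pipeline without real sensor data, here is a minimal, self-contained sketch; the sensor names, the five-sensor fleet, and the three-hour spread of timestamps are purely synthetic:

| makeresults count=50
    | eval sensor="sensor".tostring((random() % 5) + 1)
    | eval _time=_time - (random() % 10800)
    | timechart span=1h limit=0 count by sensor
    | untable _time sensor detections
    | chart limit=0 count over _time by detections

The makeresults and eval lines fabricate 50 detections spread across the last three hours; the final three lines are exactly the pipeline described above.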


Now if you were searching only a specific hour, and the number of sensors is a known fixed number such as 3600, and you want the counts for that hour, you could do something like this:

... | stats count as detections by sensor 
    | stats count as sensors by detections
    | appendpipe [stats sum(sensors) as sensors | eval sensors=3600-sensors | eval detections=0]
    | sort detections
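
To make the arithmetic concrete: if, say, 3000 distinct sensors reported at least one detection in that hour, the appendpipe block sums the sensors column to 3000, subtracts it from the assumed fleet size of 3600, and appends a row with detections=0 and sensors=600.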

Altering that search a bit with a subsearch, we could dynamically get the total number of sensors seen with a search like this:

... | stats count as detections by sensor 
    | stats count as sensors by detections
    | appendpipe 
        [ stats sum(sensors) as sensors 
        | eval sensors = [ search ... earliest=0 [noop | stats count | addinfo | rename info_max_time as latest | return latest]
            | stats dc(sensor) as count
            | return $count] - sensors
        | eval detections=0 ]
    | sort detections

The subsearch [noop | stats count | addinfo | rename info_max_time as latest | return latest] simply gets the latest time of the selected search window and returns it to its parent search as latest="######".
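For example, if the selected window happened to end at epoch 1455580800 (a made-up value), the subsearch would expand to latest="1455580800" inside its parent search.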

Its parent subsearch [ search ... earliest=0 [] | stats dc(sensor) as count | return $count], where ... is the same base search as the outer search and [] is the previous subsearch, gets the count of distinct sensors seen from January 1, 1970 (the Unix epoch) until the end of the search time window and returns that number to the base search.

This assumes every sensor has reported in to Splunk at least once at some time, and that the data is still live in Splunk (no frozen buckets!). Subsearches additionally carry a number of performance restrictions, notably a maximum execution time, so depending on how much data you have to scan to get this number and how it's indexed, this exact subsearch might not be the best method. Looking up the value instead would likely be much faster than re-searching your data to calculate it: as others have mentioned, you could use saved searches (with the same assumption of every sensor reporting at least once over all time) or other means to build lookup files, or use dbconnect to look it up in a database, and then pull the lookup value into your search.
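
As a rough illustration of the lookup idea (the file name all_sensors.csv and the 30-day lookback are assumptions for this sketch, not something prescribed in this thread), a scheduled search could maintain the sensor list:

... earliest=-30d | stats count by sensor | outputlookup all_sensors.csv

and the hourly search could then subtract against the lookup instead of re-searching all time:

... | stats count as detections by sensor 
    | stats count as sensors by detections
    | appendpipe 
        [ stats sum(sensors) as sensors 
        | eval sensors = [ | inputlookup all_sensors.csv | stats count | return $count ] - sensors
        | eval detections=0 ]
    | sort detections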

Hope these give you some ideas!

vbarna
Engager

Thanks for your comprehensive answer @acharlieh. Your first suggestion with the timechart and untable was what I was looking for.


woodcock
Esteemed Legend

One way or another you are going to have to use a canonical list of sensors, so do you have one? If not, you are going to have to create and maintain one, e.g. using outputcsv and inputcsv. But there are other problems with your situation, too. Do you realize that, the way you have constructed your search, any particular sensor could be counted in one, more than one, or even all of your rows? If sensor "X" reports once between 8 AM and 9, twice between 9 and 10, and 3 times between 10 and 11, it will be counted in rows "1", "2", and "3". I strongly suspect that this is not your desire/intention, and that you'd like sensor "X" to be counted in only one row (perhaps the largest number), right? Let's backtrack and look at your real end-game. What exactly are you trying to measure?
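
For illustration only (the file name sensor_list.csv and the 30-day window are assumptions), that create-and-maintain pattern could be a scheduled search that writes the canonical list:

... earliest=-30d | stats count by sensor | fields sensor | outputcsv sensor_list.csv

Later searches can then read it back with | inputcsv sensor_list.csv to get one row per known sensor.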


vbarna
Engager

Thanks for your comment @woodcock. The counting behavior that you have described is exactly what I want. I want to model, for a time bucket of one hour, the number of detections that sensors normally make. So I expect that sensors will indeed be counted in several rows, since they may detect different numbers of people in different hours.


Richfez
SplunkTrust

I can see you've thought this through fairly well, have already tried some things and may only need a good nudge in the right direction.

A bit of searching found a nice Answers example that may lay out the root of a search like what you need: "How to compare a list of hosts from a week to another." Obviously it's not a direct answer for you, but more an indication of another way to approach the problem.

This way, you also let Splunk generate and keep a list of valid devices that have "checked in" in some way in a time period of your own choosing (1 week, 1 month, whatever you'd want).
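
As a rough sketch of that approach (the lookup name known_sensors.csv is hypothetical), you could append the kept device list as zero-detection rows, so devices that were silent in the current hour still show up under 0:

... earliest=-1h
    | stats count as detections by sensor
    | append [ | inputlookup known_sensors.csv | fields sensor | eval detections=0 ]
    | stats max(detections) as detections by sensor
    | stats count as sensors by detections
    | sort detections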


alacercogitatus
SplunkTrust

Can you paste a sample event? It might give us a clue on how to count it.
