Splunk Search

Calculate the average of some field per some time period per some other field(s)

frbuser
Path Finder

Query

 index::dlp 
    | bucket _time span=1d 
    | stats count(EVENT_DESCRIPTION) AS "Count" BY _time,User_Name,EVENT_TYPE,EVENT_DESCRIPTION 
    | stats median(Count) AS "Median" BY _time,EVENT_TYPE

I am trying to calculate the average or median number of DLP events per user per day for each different type of event. I don't think my query is correct, as some of the numbers don't make sense.

I don't actually want to see the average number for each user; I just want to calculate the statistic across all users. For example, if there are 12 users and 3 types of events, I want to know the average number of events per user for each event type on day 1. The result would only show 3 numbers, one average per event type. So if the results were:

  • Send Mail: 10
  • Upload: 2
  • Download: 5

I would interpret this as: on day 1, each user had an average of 10 Send Mail events, and so on (for example, 120 Send Mail events spread across 12 users works out to an average of 10 per user).

Ideally I would like to calculate this for any time frame.
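
In other words, I think the shape of what I'm after is something like this (a rough sketch using the same field names as above; I'm not sure it's right): count events per user per day per event type first, then take the average or median of those per-user counts:

 index::dlp 
    | bucket _time span=1d 
    | stats count AS per_user_count BY _time,User_Name,EVENT_TYPE 
    | stats avg(per_user_count) AS "Average" median(per_user_count) AS "Median" BY _time,EVENT_TYPE

I assume changing span=1d (to span=1h, span=1w, and so on) would cover the "any time frame" part. One caveat: users with no events of a given type on a given day contribute no row, so the statistic would be computed over active users only.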

to4kawa
Ultra Champion
 index::dlp 
    | bucket _time span=1d 
    | stats count(EVENT_DESCRIPTION) AS "Count" BY _time,User_Name,EVENT_TYPE,EVENT_DESCRIPTION 
    | eventstats sum(Count) AS Count BY _time,EVENT_TYPE,EVENT_DESCRIPTION 
    | stats median(Count) AS "Median" BY _time,EVENT_TYPE
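
Compared with the original query, the added eventstats step appends the summed Count as a field without collapsing rows (unlike stats), so each per-user row carries the total for its day, event type, and description before the final median is taken.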