Splunk Search

Count events per minute, but only for the same users

summerura
Explorer

Hi Splunkers, 

Here are some examples from our logs:


[Time:11:03:01] [Function:upload] [User:aaa]
[Time:11:03:10] [Function:upload] [User:aaa]
[Time:11:03:15] [Function:upload] [User:ccc]
[Time:11:05:30] [Function:upload] [User:aaa]

and my search looks like this:

my search
| bin _time span=1m 
| dedup _time

 

I want to count the events per minute, or rather, deduplicate events that fall within the same minute, but only for the same user.
I expect a result like this, where the events of user "ccc" are not filtered out:

[Time:11:03:01] [Function:upload] [User:aaa]
[Time:11:03:15] [Function:upload] [User:ccc]
[Time:11:05:30] [Function:upload] [User:aaa]

 

But my search also filtered out other users' events, and the result looks like this:


[Time:11:03:01] [Function:upload] [User:aaa]
[Time:11:05:30] [Function:upload] [User:aaa]

These are only example logs; in reality there are not just two users but hundreds.

Can somebody help me with how I should write this search?

 

Thanks 


richgalloway
SplunkTrust

The use of bin and dedup together means only one event in each minute will be returned.  That's not going to give the desired results.  Try stats, instead.

your search
| bin _time span=1m
| stats count by _time

That will give the number of events for each minute. To get the number of distinct users for each minute, try

your search
| bin _time span=1m
| stats dc(User) as users by _time
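
If the original goal is to keep one representative event per user per minute rather than to count them, the dedup can be keyed on both the binned time and the user field. A minimal sketch, assuming the field extracts from these logs as User:

your search
| bin _time span=1m
| dedup _time User

With the sample events above, this keeps the 11:03:01 and 11:05:30 events for aaa and the 11:03:15 event for ccc, because dedup now treats each distinct combination of _time and User separately.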

 

---
If this reply helps you, Karma would be appreciated.