How to use addcoltotals based on one column, or count totals on an accumulated result

phamxuantung
Communicator

Hello,

I have an alert that outputs a csv file that looks like this:

Person   | Number_of_login | Login_fail
Person A | 1               |
Person B | 6               | 2
Person C | 4               | 1

I run this search every day and append the results to the csv file, so after a few days it looks like this:

Person   | Number_of_login | Login_fail
Person A | 1               |
Person B | 6               | 2
Person C | 4               | 1
Person A | 2               | 1
Person B | 7               | 1
Person C | 4               |
.....

Now, at the end of the month, I want to calculate the total number of logins and login failures for each person, and also the percentage of failed logins for each one. As far as I know, addcoltotals doesn't have a "by" clause like stats does (e.g. stats count by Person). I want my final table to look like this:

Person   | Number_of_login | Login_fail
Person A | 3               | 1
Person B | 13              | 3
Person C | 8               | 1


So how do I go about this?


gcusello
Legend

Hi @phamxuantung,

First of all, you should share the search you're using to generate the daily results, so I can update it.

Anyway, you can run the same search over a month instead of a day and add an eval at the end to calculate the percentage.

So if, for example, you're speaking of wineventlog, the condition is EventCode=4624 or EventCode=4625, and in this case you could run something like this:

index=wineventlog EventCode IN ("4624","4625")
| stats count AS Number_of_login count(eval(EventCode="4625")) AS Login_fail BY user
| eval perc=Login_fail/Number_of_login*100
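
If instead you want to work on the csv file you're already accumulating, you don't need addcoltotals: stats with sum and a BY clause does the job. A sketch, supposing the accumulated file is a lookup called e.g. accumulated_logins.csv (adapt the name to yours):

| inputlookup accumulated_logins.csv
| fillnull value=0 Number_of_login Login_fail
| stats sum(Number_of_login) AS Number_of_login sum(Login_fail) AS Login_fail BY Person
| eval perc=round(Login_fail/Number_of_login*100,2)

The fillnull is there because some of your Login_fail cells are empty; without it those rows would be ignored by the sum.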

Ciao.

Giuseppe


phamxuantung
Communicator

Hello @gcusello ,

My search query is pretty simple, like the one you posted. The data I provided is for example purposes only, but the real output looks similar. The problem I run into is that our data retention policy is very short (around 2 weeks), and the number of events each day can reach a few million. Calculating directly on that data set (for reporting purposes):

1. Will not be correct (we don't have enough data for a full month)

2. Is too large a dataset and will tank our system, which is strained enough already

So my solution is to accumulate the results each day, both to lessen the stress on our system and to save the data before it gets deleted.

My exact search goes something like this:

 

index=main sourcetype=dbx2 RESPCODE=0
| stats count as Total by ACQUIRER
| join type=left ACQUIRER
    [ search index=main sourcetype=dbx2 RESPCODE=0
    | eval length = len(POS_ENTRY_CODE)
    | search POS_ENTRY_CODE=8* AND length=2
    | stats count as fallback by ACQUIRER ]

Pretty simple, and it runs over Yesterday.

 


gcusello
Legend

Hi @phamxuantung,

the easiest way to proceed is to save the results of the daily execution in a summary index and use it for the monthly report.

You can do this by creating a summary index and running your search every day, adding this command at the end:

| collect index=your_summary_index
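
For example, in your case the daily scheduled search could look something like this (as a side note, the join can probably be avoided with a single stats, as in this sketch; your_summary_index is just a placeholder name):

index=main sourcetype=dbx2 RESPCODE=0
| eval length = len(POS_ENTRY_CODE)
| stats count AS Total count(eval(if(like(POS_ENTRY_CODE,"8%") AND length=2, 1, null()))) AS fallback BY ACQUIRER
| collect index=your_summary_index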

then you can run the monthly search on the summary index, which will contain the fields produced by the scheduled generating search.
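
The monthly report then becomes a simple sum over the stored daily results; a sketch (time range and rounding to adapt to your needs):

index=your_summary_index earliest=-1mon@mon latest=@mon
| stats sum(Total) AS Total sum(fallback) AS fallback BY ACQUIRER
| eval perc=round(fallback/Total*100,2)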

Ciao.

Giuseppe
