Splunk Search

Add individual logs by field to produce a count

thenormalone
Path Finder

I have several different timers in the logs which are in the format below:

INFO: [timerName=TimerName.RestTransaction
    timerMessage="Transaction Success" or "Transaction Failure"
    timerDuration=<time in milliseconds>
    sourceClass=sourceClass][domain=example.com][user=exampleUser][correlation-id=uniqueId]

and one of the logs has the format:

processedObject=processedObject[attribute1=ID, attribute2=VERSION, attribute3=TYPE1, attribute4=TYPE2, failure=null]

I want to add/subtract the counts of the different timerName events (with the transaction success message) and of the log message above, per user, and then display a table with each user and the associated count (as a percentage).

I tried the following (here I'm trying to get the percentage of the count by user at the end):

index="Exampleindex" (timerName=Timer1.RestTransaction) | stats count as Event1 by user
    | appendcols [ search index="Exampleindex" (timerName=Timer2.RestTransaction) | stats count as Event2 by user]
    | appendcols [ search index="Exampleindex" (timerName=Timer3.RestTransaction) | stats count as Event3 by user]
    | appendcols [ search index="Exampleindex" (timerName=Timer4.RestTransaction)  | stats count as Event4 by user]
    | appendcols [ search index="Exampleindex" ("processedObject=" AND "failureMetadata=null")| stats count as Event5 by user]
    | eval total1 = Event1 + Event2 - Event3 + Event4 - Event 5
    | eval total2 = Event1 + Event2 - Event3 + Event4
    | eval percentage = ((total1/total2) * 100)
    | fields count

I'm not entirely sure how to go about this, any help is appreciated!

1 Solution

to4kawa
Ultra Champion
index="Exampleindex" timerName=* OR ("processedObject=" AND "failure=null") 
| stats count(eval(timerName="Timer1.RestTransaction")) as Event1
, count(eval(timerName="Timer2.RestTransaction")) as Event2
, count(eval(timerName="Timer3.RestTransaction")) as Event3
, count(eval(timerName="Timer4.RestTransaction")) as Event4 
, count(eval(searchmatch("processedObject"))) as Event5 by user
| eval total2 = Event1 + Event2 - Event3 + Event4
| eval total1 = total2 - Event5
| eval percentage = round((total1/total2) * 100,2)
| table user percentage

Hi @thenormalone,
stats can take eval expressions directly, so you don't need appendcols (which pastes result rows together by position rather than matching them on user).
How about this?

thenormalone
Path Finder

This works too but for Event5,

processedObject=processedObject[attribute1=ID, attribute2=VERSION, attribute3=TYPE1, attribute4=TYPE2, failure=null]

this is the log I'm trying to get a count of. I specifically need failure=null in this log, which is why I'm trying ("processedObject=" AND "failure=null"). Reading up on searchmatch, I think it takes a boolean argument, so I'm not sure how to go about fixing this.


thenormalone
Path Finder

I made the change to just do
count(eval(searchmatch("processedObject"))) as Sign by user

since my search already includes ("processedObject=" AND "failure=null") and this worked!
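
For reference, the original question also wanted to count only the timer events with the transaction success message. Assuming timerMessage is an extracted field (it appears in the log format at the top, but no search in this thread actually filters on it), a hedged variant of the accepted search could add that condition inside each eval:

index="Exampleindex" timerName=* OR ("processedObject=" AND "failure=null")
| stats count(eval(timerName="Timer1.RestTransaction" AND timerMessage="Transaction Success")) as Event1
, count(eval(timerName="Timer2.RestTransaction" AND timerMessage="Transaction Success")) as Event2
, count(eval(timerName="Timer3.RestTransaction" AND timerMessage="Transaction Success")) as Event3
, count(eval(timerName="Timer4.RestTransaction" AND timerMessage="Transaction Success")) as Event4
, count(eval(searchmatch("processedObject"))) as Event5 by user
| eval total2 = Event1 + Event2 - Event3 + Event4
| eval total1 = total2 - Event5
| eval percentage = round((total1/total2) * 100,2)
| table user percentage

count(eval(...)) only counts events where the expression evaluates to true, so the extra AND simply narrows each timer count to successful transactions.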


jkat54
SplunkTrust
 index=example
 | eval rest1=if(timerName=="Timer1.RestTransaction",1,0)
 | eval rest2=if(timerName=="Timer2.RestTransaction",1,0)
 | eval rest3=if(timerName=="Timer3.RestTransaction",1,0)
 | eval rest4=if(timerName=="Timer4.RestTransaction",1,0)
 | eval rest5=if(searchmatch("processedObject=* AND failureMetadata=null"),1,0)
 | where isnotnull(rest1) OR isnotnull(rest2) OR isnotnull(rest3) OR isnotnull(rest4) OR isnotnull(rest5)
 | stats sum(rest1) as r1 sum(rest2) as r2 sum(rest3) as r3 sum(rest4) as r4 sum(rest5) as r5 count as total by user
 | eval pctR1=(r1/total)*100
 | eval pctR2=(r2/total)*100
 | eval pctR3=(r3/total)*100
 | eval pctR4=(r4/total)*100
 | eval pctR5=(r5/total)*100

jkat54
SplunkTrust

You'll want to change the evals at the end of the search to do your additions and subtractions, but then you'll also have to add/subtract those counts from the total before taking the percentage. It can be done, but it's hard to type on my mobile!
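
A hedged sketch of what that tail could look like, reusing the add/subtract combination from the question (Event1 + Event2 - Event3 + Event4, then minus the processedObject count) with the field names from jkat54's search; these lines would replace the pctR* evals above:

 | eval total2 = r1 + r2 - r3 + r4
 | eval total1 = total2 - r5
 | eval percentage = round((total1 / total2) * 100, 2)
 | table user percentage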


thenormalone
Path Finder

My end goal is to have a table with each user and their percentage.


thenormalone
Path Finder

Thanks for taking the time to look at this. Appreciate it 🙂

Apologies for the amateur question, but I'm getting "The arguments to the 'searchmatch' function are invalid." with the following:

eval rest5=if(searchmatch('"processedObject=" AND "failureMetadata=null"'),1,0)

I've tried playing with different combinations, without success.


to4kawa
Ultra Champion
 | eval rest5=if(searchmatch("processedObject=") AND searchmatch("failureMetadata=null"),1,0)

try this.


thenormalone
Path Finder

Unfortunately, I'm still getting the same error. FYI,

processedObject=processedObject[attribute1=ID, attribute2=VERSION, attribute3=TYPE1, attribute4=TYPE2, failure=null]

this is the log I'm trying to get a count of. I specifically need failure=null in this log, which is why I'm trying ("processedObject=" AND "failure=null"). Reading up on searchmatch, I think it takes a boolean argument, so I'm not sure how to go about fixing this.
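
If searchmatch keeps rejecting the arguments, one alternative sketch (an untested suggestion, not from this thread) is to test the raw event text with the eval like() function, which takes SQL-style % wildcards and matches the literal strings even if failure is not extracted as a field:

 | eval rest5=if(like(_raw, "%processedObject=%") AND like(_raw, "%failure=null%"), 1, 0)

This line would be a drop-in replacement for the rest5 eval in jkat54's search (using failure=null from the actual log rather than failureMetadata).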
