Splunk Search

How to calculate a count based on a condition?

csahoo
Explorer

Hi,

I want to calculate a count based on a condition. In the query below:

if the event is 'Sync', the 'failed' count should be taken from
source="*gps-request-processor-test*"
if the event is 'Async', the 'failed' count should be taken from
source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*"


index="*dockerlogs*" source="*gps-request-processor-test*" OR source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*" event="*Request" documentType="*" OR labelType="*"
| eval LabelType=coalesce(labelType, documentType)
| eval event = case (like(event,"%Sync%"),"Sync",like(event,"%Async%"),"Async")
| rex field=sourceNodeCode mode=sed "s/1067/Windrunner/g"
| rex field=sourceNodeCode mode=sed "s/531/SFS/g"
| rex field=geoCode mode=sed "s/EUROPE_MIDDLE_EAST_AFRICA/EMEA/g"
| eval Geo=geoCode,Node=sourceNodeCode
| eval syncelapsed=if(source like "%gps-request-processor%",elapsedTime,null())
| eval asyncelapsed=if(source like "%gps-external-processor%" OR source like "%gps-artifact-processor%",elapsedTime,null())
| stats count(eval(status="Received" AND source like "%gps-request-processor%" )) as received count(eval(deliveryStatus="Success")) as delivered
count(eval(status="Failed")) as failed
avg(syncelapsed) as syncelapsedtime avg(asyncelapsed) as asyncelapsedtime avg(deliveryElapsedTime) as deliverytime by Node Geo LabelType event


yuanliu
SplunkTrust

Where is the question? Does your code give what you want? If not, what is the output, and why is that output not what you wanted? (I.e., explain your requirements. Given the complexity of your sample code, you should also explain what the data look like, in particular what constitutes "sync" and what constitutes "async".) Furthermore, your opening requirement is "count based on the condition" (in Splunk lingo, count with groupby), but that stats command makes it very unclear what kind of "count" you want to base on those conditions.

If I must take a blind shot, this is your effort to discern the source groups:

| eval syncelapsed=if(source like "%gps-request-processor%",elapsedTime,null())
| eval asyncelapsed=if(source like "%gps-external-processor%" OR source like "%gps-artifact-processor%",elapsedTime,null())

And this is your definition of "sync" vs "async"

| eval event = case (like(event,"%Sync%"),"Sync",like(event,"%Async%"),"Async")

If my mind-reading is correct, your requirement is basically a filter based on these fields before performing stats. Something like

| where isnotnull(syncelapsed) AND event == "Sync" OR isnotnull(asyncelapsed) AND event == "Async"

This is an extremely literal interpretation of your opening requirement. I get the feeling that there are many alternatives that would make this more efficient. But if we go with this literal approach, your search can be

index="*dockerlogs*" source="*gps-request-processor-test*" OR source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*" event="*Request" documentType="*" OR labelType="*"
| eval event = case (like(event,"%Sync%"),"Sync",like(event,"%Async%"),"Async")
| eval syncelapsed=if(source like "%gps-request-processor%",elapsedTime,null())
| eval asyncelapsed=if(source like "%gps-external-processor%" OR source like "%gps-artifact-processor%",elapsedTime,null())
| where isnotnull(syncelapsed) AND event == "Sync" OR isnotnull(asyncelapsed) AND event == "Async" ``` remove unnecessary events as early as possible ```
| eval LabelType=coalesce(labelType, documentType)
| rex field=sourceNodeCode mode=sed "s/1067/Windrunner/g"
| rex field=sourceNodeCode mode=sed "s/531/SFS/g"
| rex field=geoCode mode=sed "s/EUROPE_MIDDLE_EAST_AFRICA/EMEA/g"
| eval Geo=geoCode,Node=sourceNodeCode
| stats count(eval(status="Received" AND source like "%gps-request-processor%" )) as received count(eval(deliveryStatus="Success")) as delivered
count(eval(status="Failed")) as failed
avg(syncelapsed) as syncelapsedtime avg(asyncelapsed) as asyncelapsedtime avg(deliveryElapsedTime) as deliverytime by Node Geo LabelType event

Here, I moved independent command groups around so the search can have less data to deal with. Hope this helps.
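
One of the alternatives I alluded to, just as an untested sketch: fold the source-to-event pairing into the case() that derives event, so that mismatched combinations come out null and a single isnotnull() check drops them. This would replace the two elapsed evals and the where above.

| eval event = case(like(event,"%Sync%") AND source like "%gps-request-processor%", "Sync",
    like(event,"%Async%") AND (source like "%gps-external-processor%" OR source like "%gps-artifact-processor%"), "Async")
| where isnotnull(event) ``` mismatched event/source combinations get no value and are dropped ```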


csahoo
Explorer

Let me simplify my requirement. Below is my query:

index="*dockerlogs*" source="*gps-request-processor-test*" OR source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*" event="*Request"
| eval LabelType=coalesce(labelType, documentType)
| eval event = case (like(event,"%Sync%"),"Sync",like(event,"%Async%"),"Async")
| stats count(eval(status="Failed")) as failed by sourceNodeCode geoCode LabelType event

where:
source is my application name
event is the type of request, whether a synchronous or an asynchronous request
labelType is the type of label
sourceNodeCode and geoCode are the shop code and shop region from which the label is requested
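
For illustration only, a hypothetical event might look like the following (the field names come from the query above; the values are invented, not taken from my data):

source="gps-external-processor-test" event="AsyncLabelRequest" status="Failed" sourceNodeCode="531" geoCode="EUROPE_MIDDLE_EAST_AFRICA" labelType="ShippingLabel" elapsedTime=182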


Now I want to find the failed request count grouped by sourceNodeCode, geoCode, LabelType, and event,

where in the case of a synchronous request (event), the failed count should come from the '*gps-request-processor-test*' application,

and in the case of an asynchronous request (event), the failed count should come from the "*gps-external-processor-test*" OR "*gps-artifact-processor-test*" applications.

The output should look something like the attached output, where the failed count comes from a different application depending on the event type.


yuanliu
SplunkTrust

So my mind-reading was spot on 😉 The above code should work.

In the simplified scenario there is no pre-defined sync/async elapsed field derived from the source group, so you make the source group explicit in the where command:

index="*dockerlogs*" source="*gps-request-processor-test*" OR source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*" event="*Request"
| eval LabelType=coalesce(labelType, documentType)
| eval event = case (like(event,"%Sync%"),"Sync",like(event,"%Async%"),"Async")
| where event == "Sync" AND source like "%gps-request-processor%" OR event == "Async" AND (source like "%gps-external-processor%" OR source like "%gps-artifact-processor%")
| stats count(eval(status="Failed")) as failed by sourceNodeCode geoCode LabelType event

csahoo
Explorer

index="*dockerlogs*" source="*gps-request-processor-test*" OR source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*" event="*Request"
| eval LabelType=coalesce(labelType, documentType)
| eval event = case (like(event,"%Sync%"),"Sync",like(event,"%Async%"),"Async")
| where event == "Sync" AND source like "%gps-request-processor%" OR event == "Async" AND (source like "%gps-external-processor%" OR source like "%gps-artifact-processor%")
| stats count(eval(status="Received")) as received count(eval(status="Failed")) as failed by sourceNodeCode geoCode LabelType event


Suppose I need to calculate the count of both received and failed, but I want this where clause to apply only when calculating the failed count. In that case, what should my query be?


yuanliu
SplunkTrust

As I mentioned in your new thread, you just apply the restrictive filter to the Failed events only and let everything else pass through, i.e.,

index="*dockerlogs*" source="*gps-request-processor-test*"
OR source="*gps-external-processor-test*"
OR source="*gps-artifact-processor-test*" event="*Request" | eval LabelType=coalesce(labelType, documentType) | eval event = case (like(event,"%Sync%"),"Sync",like(event,"%Async%"),"Async") | where status != "Failed" AND (event == "Sync" AND source like "%gps-request-processor%"
OR event == "Async" AND (source like "%gps-external-processor%"
OR source like "%gps-artifact-processor%")) | stats count(eval(status=="Failed")) as failed by sourceNodeCode geoCode LabelType event
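
An alternative sketch of the same idea (untested against your data): keep all events and fold the restriction into the failed counter itself with count(eval(...)), so received stays unrestricted and no where command is needed.

index="*dockerlogs*" source="*gps-request-processor-test*" OR source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*" event="*Request"
| eval LabelType=coalesce(labelType, documentType)
| eval event = case (like(event,"%Sync%"),"Sync",like(event,"%Async%"),"Async")
| stats count(eval(status="Received")) as received
    count(eval(status="Failed" AND (event="Sync" AND source like "%gps-request-processor%" OR event="Async" AND (source like "%gps-external-processor%" OR source like "%gps-artifact-processor%")))) as failed
    by sourceNodeCode geoCode LabelType event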