Archive

uniq command usage

Path Finder

Hi,

I have events based on the following flow:
every transaction id has 4 logpoints (logpoint is a field):
request-in, request-out, response-in, response-out

Can anyone help?

1 Solution

SplunkTrust

Hi @shraddhamuduli,

Try this. You might need to change the logic in the if based on your final requirement, but the basic logic should work. Let us know if you need further assistance.

index=test sourcetype=trans
| table txn_id, logpoint, statuscode
| stats list(logpoint) as logpoints, list(statuscode) as statuscodes by txn_id
| eval req_in=mvfind(logpoints, "Req-in"), req_out=mvfind(logpoints, "Req-Out")
| eval res=mvzip(logpoints, statuscodes)
| mvexpand res
| table txn_id, res, req_in, req_out
| eval res=split(res, ",")
| eval logpoint=mvindex(res,0), statuscode=mvindex(res,1)
| fields - res
| fillnull value="NA"
| eval platform_failure=if(req_in!="NA" AND req_out=="NA" AND (logpoint=="Response-out" AND statuscode=500), "Yes", "No")
| eval application_failure=if(((req_in!="NA" AND req_out=="NA") AND (logpoint=="Response-out" AND (match(statuscode, "^4") OR statuscode=500)))
    OR ((logpoint=="Response-in" AND statuscode=200) AND (logpoint=="Response-out" AND statuscode=500)), "Yes", "No")
| where application_failure="Yes" OR platform_failure="Yes"

The sample data used is:

1,Req-in,200
1,Req-Out,200
1,Response-in,200
1,Response-out,200
2,Req-in,200
2,Req-Out,200
2,Response-in,200
2,Response-out,200
3,Req-in,200
3,Req-Out,200
3,Response-in,200
3,Response-out,500
4,Req-in,200
4,Response-in,200
4,Response-out,200
5,Req-in,200
5,Req-Out,200
5,Response-in,200
5,Response-out,500



Esteemed Legend

Try this:

| makeresults 
| eval raw="1,Req_In,200 1,Req_Out,200 1,Response_In,200 1,Response_Out,200 2,Req_In,200 2,Response_In,200 2,Response_Out,400 3,Req_In,200 3,Req_Out,200 3,Response_In,200 3,Response_Out,500 4,Req_In,200 4,Response_In,200 4,Response_Out,500 5,Req_In,200 5,Req_Out,200 5,Response_In,200 5,Response_Out,500"
| makemv raw
| mvexpand raw
| rex field=raw "^(?<txn_id>[^,]*),(?<logpoint>[^,]*),(?<statuscode>[^,]*)$"
| streamstats count AS time_offset_seconds
| eval _time = _time + time_offset_seconds
| fields - time_offset_seconds
| rename raw AS _raw

| rename COMMENT AS "Everything above generates sample event data; everything below is your solution"

| sort 0 _time
| eval {logpoint} = statuscode
| stats list(*) AS * BY txn_id
| eval Platform_Failure = if((isnotnull(Req_In) AND isnull(Req_Out) AND Response_Out=="500"), "1", "0")
| eval Application_Error = if((isnotnull(Req_In) AND isnull(Req_Out) AND (Response_Out>=400 AND NOT Response_Out=="500")) OR (Response_In=="200" AND Response_Out=="500"), "1", "0")
| multireport 
    [ where Platform_Failure=="1" | stats values(txn_id) AS txn_id count AS Platform_Failure ]
    [ where Application_Error=="1" | stats values(txn_id) AS txn_id count AS Application_Error ]
| fields - txn_id

Path Finder

Hi Woodcock,

Good morning.
Multireport is somehow not working. It displays results for the first pipe, i.e. platform failure, but doesn't show any value for the second pipe, application failure.
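If multireport keeps misbehaving, the same two counts can be produced without it. Here is a sketch (not tested against real data) that reuses the Platform_Failure and Application_Error fields computed in the answer above:

| stats count(eval(Platform_Failure=="1")) AS Platform_Failure, count(eval(Application_Error=="1")) AS Application_Error

A single stats avoids the two bracketed multireport branches, at the cost of losing the per-branch txn_id lists.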


Path Finder

Hi Woodcock

Could you please help me figure out why my failure and error values are not showing up as percentages? I'm making a stacked bar graph.

index=idx_apix sourcetype="kafka:topicEvent"
| fillnull value="NA" status-code
| table transaction-id, logpoint, status-code, _time
| stats list(logpoint) as logpoints, list(status-code) as statuscodes, list(_time) as time by transaction-id
| eval req_in=mvfind(logpoints, "request-in"), req_out=mvfind(logpoints, "request-out")
| eval res=mvzip(logpoints, statuscodes)
| mvexpand res
| table transaction-id, res, req_in, req_out, _time
| eval res=split(res, ",")
| eval logpoint=mvindex(res,0), statuscode=mvindex(res,1)
| fillnull value="NA"
| eval platform_failure=if(req_in!="NA" AND req_out=="NA" AND (logpoint=="response-out" AND statuscode=500),"Yes","No")
| eval application_failure=if( (req_in!="NA" AND req_out=="NA") AND ( (res="response-out,503" OR res="response-out,400" OR res="response-out,401" OR res="response-out,403" OR res="response-out,404" OR res="response-out,405" OR res="response-out,409" OR res="response-out,410" OR res="response-out,412") OR (res="response-in,200" OR res="response-out,500")), "Yes","No")
| timechart span=15m count as total, count(eval(platform_failure="Yes")) as Failure, count(eval(application_failure="Yes")) as Error
| eval Success=total-(Failure+Error)
| eval Success=round((Success/total)*100,2)
| eval Failure=round((Failure/total)*100,2)
| eval Error=round((Error/total)*100,2)
| fields time,Success,Failure,Error
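A likely reason the timechart shows nothing (a guess from reading the query above, not verified against the data): after the stats, _time no longer exists as an event field — it only survives inside the multivalue time field — so timechart has nothing to bucket on, and the final | fields time also drops the _time field that timechart would emit. A sketch of carrying the timestamp through, reusing the field names above:

| eval res=mvzip(mvzip(logpoints, statuscodes), time)
| mvexpand res
| eval res=split(res, ",")
| eval logpoint=mvindex(res, 0), statuscode=mvindex(res, 1), _time=tonumber(mvindex(res, 2))

With _time restored per row, timechart can bucket the events, and the last line of the query should then select _time rather than time.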



Path Finder

Hi Renjith, could you please help me figure out why my failure and error values are not showing up as percentages? I'm making a stacked bar graph.

index=idx_apix sourcetype="kafka:topicEvent"
| fillnull value="NA" status-code
| table transaction-id, logpoint, status-code, _time
| stats list(logpoint) as logpoints, list(status-code) as statuscodes, list(_time) as time by transaction-id
| eval req_in=mvfind(logpoints, "request-in"), req_out=mvfind(logpoints, "request-out")
| eval res=mvzip(logpoints, statuscodes)
| mvexpand res
| table transaction-id, res, req_in, req_out, _time
| eval res=split(res, ",")
| eval logpoint=mvindex(res,0), statuscode=mvindex(res,1)
| fillnull value="NA"
| eval platform_failure=if(req_in!="NA" AND req_out=="NA" AND (logpoint=="response-out" AND statuscode=500),"Yes","No")
| eval application_failure=if( (req_in!="NA" AND req_out=="NA") AND ( (res="response-out,503" OR res="response-out,400" OR res="response-out,401" OR res="response-out,403" OR res="response-out,404" OR res="response-out,405" OR res="response-out,409" OR res="response-out,410" OR res="response-out,412") OR (res="response-in,200" OR res="response-out,500")), "Yes","No")
| timechart span=15m count as total, count(eval(platform_failure="Yes")) as Failure, count(eval(application_failure="Yes")) as Error
| eval Success=total-(Failure+Error)
| eval Success=round((Success/total)*100,2)
| eval Failure=round((Failure/total)*100,2)
| eval Error=round((Error/total)*100,2)
| fields time,Success,Failure,Error


SplunkTrust

Are you getting these values after this line: | timechart span=15m count as total, count(eval(platform_failure="Yes")) as Failure, count(eval(application_failure="Yes")) as Error ?


Path Finder

No Renjith, I'm not receiving any result. Can't figure out what's wrong.


SplunkTrust

OK, then is it possible to find out up to which step you are getting data? Maybe by removing the steps one by one.


Path Finder

Hi @renjith.nair, thank you so much for this amazingly powerful formula.
Can you please help me with one small thing? I'm building a timechart over the last 4 hours.
I suppose we are not passing the _time field into the query, so I mvzip _time into time. Please review my query.

index=idx_ sourcetype IN ("k") component=*
| fillnull value="NA" status-code
| table transaction-id, logpoint, status-code, component, _time
| stats list(logpoint) as logpoints, list(status-code) as statuscodes, list(_time) as time by transaction-id, component
| eval req_in=mvfind(logpoints, "request-in"), req_out=mvfind(logpoints, "request-out")
| eval res=mvzip(logpoints, statuscodes)
| eval res=mvzip(res, time)
| mvexpand res
| table transaction-id, res, req_in, req_out, component
| eval res=split(res, ",")
| eval logpoint=mvindex(res,0), statuscode=mvindex(res,1), time=mvindex(res,2)
| fillnull value="NA"
| eval platform_failure=if(req_in!="NA" AND req_out=="NA" AND (logpoint=="response-out" AND statuscode=500),"1","0")
| where platform_failure="1"
| eval _time=time
| timechart span=1h count as Count by component

But this only gives the last hour.
Do you have any idea?
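One thing worth checking (an educated guess, not verified against the data): after mvindex, time is a string, and assigning a non-numeric string to _time silently misbehaves. A sketch of a safer tail for the query above, coercing explicitly and dropping rows where coercion fails:

| eval _time = tonumber(time)
| where isnotnull(_time)
| timechart span=1h count AS Count BY component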


Path Finder

Hi Renjith, does it work?


SplunkTrust

I didn't get what you mean, but what was suggested is: execute your search part by part and see where the data goes missing. For example:

First run index=idx_apix sourcetype="kafka:topicEvent", and if you get results, add | fillnull value="NA" status-code, then | table transaction-id,logpoint,status-code,_time, and so on, until you stop getting results. Then you can identify which step is not returning data and troubleshoot.


Path Finder

Hi Renjith, up to the point just before timechart, my results come up fine, like platform_failure=Yes or application_failure=No.

Now I'm trying to count platform_failure="Yes" as failure,
count application_failure="Yes" as error,
and the rest as success, i.e. success = total - (error + failure),

and then take the percentages of success, error, and failure and display them in a stacked bar graph.
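The counting and percentages described above can be sketched like this (assuming platform_failure and application_failure are already computed per row, and that _time has been restored before this point so timechart has something to bucket on):

| timechart span=15m count AS total, count(eval(platform_failure="Yes")) AS Failure, count(eval(application_failure="Yes")) AS Error
| eval Success = total - (Failure + Error)
| foreach Success Failure Error [ eval <<FIELD>> = round(('<<FIELD>>' / total) * 100, 2) ]
| fields _time, Success, Failure, Error

Note that timechart emits _time, not time, so the final fields line must keep _time for the chart to render.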


Path Finder

Hi Renjith,

I'm receiving data up to this point:

index=idx_apix sourcetype="kafka:topicEvent"
| fillnull value="NA" status-code
| table transaction-id, logpoint, status-code, _time
| stats list(logpoint) as logpoints, list(status-code) as statuscodes, list(_time) as time by transaction-id
| eval req_in=mvfind(logpoints, "request-in"), req_out=mvfind(logpoints, "request-out")
| eval res=mvzip(logpoints, statuscodes)
| mvexpand res
| table transaction-id, res, req_in, req_out, _time
| eval res=split(res, ",")
| eval logpoint=mvindex(res,0), statuscode=mvindex(res,1)
| fillnull value="NA"
| eval platform_failure=if(req_in!="NA" AND req_out=="NA" AND (logpoint=="response-out" AND statuscode=500),"Yes","No")
| eval application_failure=if( (req_in!="NA" AND req_out=="NA") AND ( (res="response-out,503" OR res="response-out,400" OR res="response-out,401" OR res="response-out,403" OR res="response-out,404" OR res="response-out,405" OR res="response-out,409" OR res="response-out,410" OR res="response-out,412") OR (res="response-in,200" OR res="response-out,500")), "Yes","No")


SplunkTrust

Hi @shraddhamuduli,
Just for clarification: you accepted the answer and then it got -2 points. So was there anything missing here, or why was it "flagged"? Bit confused 🙂


Path Finder

Hi Renjith, please delete your answer post, that's what I want. I am unable to delete this whole post.


SplunkTrust

Why do you want to delete it? If it's not the answer you are looking for, then just "unaccept" it, as I don't see any sensitive data in there.
