Splunk Enterprise

index=** source_type=** cf_app_name=*** api_call="*"

Scorpion
New Member

index=**** source_type=** cf_app_name=** api_call="*" 
| where like (api_call, "%xyz%") 
| table _time, response_code, duration, api_call 
| bin _time span=1d 
| appendpipe 
    [ | chart count over api_call by response_code ] 
| stats sum(*) as *, count as Number_Of_Calls, perc95(duration) as perc95_duration, avg(duration) as avg_duration by api_call 
| eval perc95_duration=round(perc95_duration,3), avg_duration=round(avg_duration,3) 
| sort - _time 
| fields - duration, response_code 
| table api_call, _time, *, Number_Of_Calls

My _time column is always blank. In each output row, either _time or the response_code columns are filled in, never both.


venkatasri
SplunkTrust

Hi @Scorpion 

Can you try this? When you run stats, _time is gone because it is associated with the individual events. You have to use an aggregation function to get _time back, or group by _time, api_call. Here I have used an aggregation function (earliest), renamed _time to time, and converted it to a human-readable format.

index=**** source_type=** cf_app_name=** api_call="*" 
| where like (api_call, "%xyz%") 
| table _time,response_code, duration,api_call 
| bin _time span=1d 
| appendpipe 
    [| chart count over api_call by response_code ] 
| stats sum(*) as *,count as Number_Of_Calls,perc95(duration) as perc95_duration,avg(duration) as avg_duration, earliest(_time) as time by api_call
| convert ctime(time) as time
| eval perc95_duration=round(perc95_duration,3),avg_duration=round(avg_duration,3) 
| sort - time 
| fields - duration,response_code 
| table api_call,time,*,Number_Of_Calls
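
For the other option mentioned above, grouping by _time and api_call instead of pulling _time back with an aggregation function, a minimal sketch could look like this (using the same placeholder index and field names; adjust to your data):

index=**** source_type=** cf_app_name=** api_call="*" 
| where like(api_call, "%xyz%") 
| bin _time span=1d 
| stats count as Number_Of_Calls, perc95(duration) as perc95_duration, avg(duration) as avg_duration by _time, api_call 
| eval perc95_duration=round(perc95_duration,3), avg_duration=round(avg_duration,3) 
| sort - _time

This gives one row per day per api_call, so _time is never blank; the response_code breakdown from the appendpipe would need its own search (for example, chart count over api_call by response_code).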

---

An upvote would be appreciated, and please accept the solution if this reply helps!
