Response Time capture and count

kdulhan
Explorer

I have the below Splunk event:
ns=app1 Service='trigger1' id=100 ActNo='101' ServiceType='REST',ResponseCode='200',ResponseTime='322ms'

I want to extract all the events where ResponseTime>1000ms.

The following search works fine:
ns=app1 Service='trigger1' id=100 | stats count(eval(ResponseTime>"'500ms'")) as "Count SLA > 500 ms"

But when I try to count the events with ResponseTime>1000ms using

ns=app1 Service='trigger1' id=100 | stats count(eval(ResponseTime>"'1000ms'")) as "Count SLA > 1000 ms"

it does not return the expected count, even though I do have events with ResponseTime > 1000ms.

I am able to search for an event with ResponseTime="'1552ms'", i.e.

ns=app1 Service='trigger1' id=100 | search ResponseTime="'1552ms'"

Thank you!
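
For context on why the two string comparisons above behave inconsistently: when both sides of > are strings, eval compares them lexicographically rather than numerically. A minimal makeresults sketch, with hypothetical field and values, and assuming the field value keeps its single quotes as in the sample event, shows a '322ms' value sorting above '1000ms':

| makeresults | eval ResponseTime="'322ms'" | eval looks_bigger=if(ResponseTime>"'1000ms'", "yes", "no")

This is why a numeric extraction, as in the accepted answer below, is needed.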


somesoni2
Revered Legend

Try like this (extracting response_time as a number for an easier mathematical comparison):

ns=app1 Service='trigger1' id=100 | rex field=ResponseTime "'*(?<response_time>\d+)ms" | stats count(eval(response_time>1000)) as "Count SLA > 1000 ms"
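
Since the original ask was to extract the matching events rather than only count them, the same rex can feed a where clause. A minimal sketch building on the search above, assuming the ActNo, ServiceType and ResponseCode fields from the sample event are auto-extracted:

ns=app1 Service='trigger1' id=100 | rex field=ResponseTime "'*(?<response_time>\d+)ms" | where tonumber(response_time)>1000 | table _time ActNo ServiceType ResponseCode ResponseTime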



kdulhan
Explorer

Thank you.

Kindly respond to my field extraction query.
