Splunk Dev

Response Time capture and count

kdulhan
Explorer

I have the below Splunk event:
ns=app1 Service='trigger1' id=100 ActNo='101' ServiceType='REST',ResponseCode='200',ResponseTime='322ms'

I want to extract all the events where ResponseTime>1000ms.

The following works fine:

ns=app1 Service='trigger1' id=100 | stats count(eval(ResponseTime>"'500ms'")) as "Count SLA > 500 ms"

But when I try to count the events with ResponseTime > 1000ms using

ns=app1 Service='trigger1' id=100 | stats count(eval(ResponseTime>"'1000ms'")) as "Count SLA > 1000 ms"

it does not count them, even though I do have events with ResponseTime > 1000ms.

I am able to search for an event with ResponseTime="'1552ms'", i.e.

ns=app1 Service='trigger1' id=100 | search ResponseTime="'1552ms'"

Thank you!

1 Solution

somesoni2
Revered Legend

Try like this (extracting response_time as a number for easier mathematical comparison). Because the ResponseTime value contains the surrounding quotes and the ms suffix, your original > comparison is evaluated as a string comparison, which gives incorrect results; comparing only the numeric part fixes that.

ns=app1 Service='trigger1' id=100 | rex field=ResponseTime "'*(?<response_time>\d+)ms" | stats count(eval(response_time>1000)) as "Count SLA > 1000 ms"
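If you prefer to avoid rex, a roughly equivalent sketch (assuming the extracted ResponseTime value looks like '322ms', possibly including the surrounding quotes) is to strip the non-digit characters with eval's replace() and tonumber():

ns=app1 Service='trigger1' id=100 | eval response_time=tonumber(replace(ResponseTime,"[^0-9]","")) | stats count(eval(response_time>1000)) as "Count SLA > 1000 ms"

Either way, response_time is a plain number, so the > 1000 comparison is numeric rather than lexicographic.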


kdulhan
Explorer

Thank you.

Kindly respond to my field extraction query.
