
response time

kunadkat
Explorer

I would like to put response times into three buckets: low (< 1 second), medium (1-2 seconds), and high (> 2 seconds). I would then like to calculate the percentage of response times in each bucket.

The following is the query:

sourcetype="jboss" TOTAL SEARCH TIME CAREWEB | eventstats count as total |eval rp=EASYDOC_JBOSS_TIME/1000 | rangemap field=rp low=0-1.0 medium=1-2 high=2-100 | stats count by range

Even though I see total in the fields list, I am not able to use it in an eval to calculate the percentages.

Per the SLA, I have to produce a graph showing that 99% of response times are less than 1 second.

Thanks,
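For reference, the reason total cannot be used here: stats count by range only outputs its aggregates and split-by fields, so the total field created by eventstats is discarded at that point. One possible workaround, sketched with the same field names as the query above, is to recompute the total after the stats:

sourcetype="jboss" TOTAL SEARCH TIME CAREWEB
| eval rp=EASYDOC_JBOSS_TIME/1000
| rangemap field=rp low=0-1.0 medium=1-2 high=2-100
| stats count by range
| eventstats sum(count) as total
| eval percent=round(count/total*100,2)

This should yield one row per bucket with that bucket's share of all events.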

1 Solution

kristian_kolb
Ultra Champion

You could try:

sourcetype="jboss" TOTAL SEARCH TIME CAREWEB 
| eval rp=EASYDOC_JBOSS_TIME/1000 
| rangemap field=rp low=0-1.0 medium=1-2 high=2-100 
| stats c AS TOTAL c(eval(range="low")) AS OK_COUNT c(eval(range="medium")) AS NOT_OK_COUNT c(eval(range="high")) AS REALLY_BAD_COUNT 
| eval SLA_OK_PERC = round((OK_COUNT / TOTAL *100),2) 
| eval SLA_BAD_PERC = round((NOT_OK_COUNT / TOTAL * 100), 2) 
| eval SLA_DISASTER_PERC = round((REALLY_BAD_COUNT / TOTAL *100),2)

I believe that swapping 'stats' for 'timechart span=1d' will give you the results you want; see below. I haven't tried it, though, as I have no good sample logs available.

sourcetype="jboss" TOTAL SEARCH TIME CAREWEB 
| eval rp=EASYDOC_JBOSS_TIME/1000 
| rangemap field=rp low=0-1.0 medium=1-2 high=2-100 
| timechart span=1d c AS TOTAL c(eval(range="low")) AS OK_COUNT c(eval(range="medium")) AS NOT_OK_COUNT c(eval(range="high")) AS REALLY_BAD_COUNT 
| eval SLA_OK_PERC = round((OK_COUNT / TOTAL *100),2) 
| eval SLA_BAD_PERC = round((NOT_OK_COUNT / TOTAL * 100), 2) 
| eval SLA_DISASTER_PERC = round((REALLY_BAD_COUNT / TOTAL *100),2)

Hope this helps,

Kristian
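If the graph only needs to show SLA compliance against the 99% target, a further variation could plot just the daily OK percentage alongside a fixed target line. A sketch along the same lines as the queries above; SLA_TARGET is an illustrative field name, not something from the thread:

sourcetype="jboss" TOTAL SEARCH TIME CAREWEB
| eval rp=EASYDOC_JBOSS_TIME/1000
| rangemap field=rp low=0-1.0 medium=1-2 high=2-100
| timechart span=1d count AS TOTAL count(eval(range="low")) AS OK_COUNT
| eval SLA_OK_PERC = round((OK_COUNT / TOTAL * 100), 2)
| eval SLA_TARGET = 99
| fields _time SLA_OK_PERC SLA_TARGET

Rendered as a line chart, any day where SLA_OK_PERC dips below SLA_TARGET stands out as a breach.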


kunadkat
Explorer

Kristian,

sourcetype="jboss" TOTAL SEARCH TIME CAREWEB
| eval rp=EASYDOC_JBOSS_TIME/1000
| rangemap field=rp low=0-1.0 medium=1-2 high=2-100
| stats c AS TOTAL c(eval(range="low")) AS OK_COUNT c(eval(range="medium")) AS NOT_OK_COUNT c(eval(range="high")) AS REALLY_BAD_COUNT
| eval SLA_OK_PERC = round((OK_COUNT / TOTAL *100),2)
| eval SLA_BAD_PERC = round((NOT_OK_COUNT / TOTAL * 100), 2)
| eval SLA_DISASTER_PERC = round((REALLY_BAD_COUNT / TOTAL *100),2)

This works, but how do I get these stats per day?
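The timechart span=1d variant in the accepted answer above addresses exactly this. An equivalent alternative, sketched with the same assumed field names, is to bin events by day and add _time to the stats split:

sourcetype="jboss" TOTAL SEARCH TIME CAREWEB
| eval rp=EASYDOC_JBOSS_TIME/1000
| rangemap field=rp low=0-1.0 medium=1-2 high=2-100
| bin _time span=1d
| stats count AS TOTAL count(eval(range="low")) AS OK_COUNT count(eval(range="medium")) AS NOT_OK_COUNT count(eval(range="high")) AS REALLY_BAD_COUNT by _time
| eval SLA_OK_PERC = round((OK_COUNT / TOTAL * 100), 2)

This should produce one row per day rather than a single cumulative row.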


dwaddle
SplunkTrust

There are also the percXX stats functions, which compute the XXth percentile of a data set. This may (or equally may not) be a better approach to your measurement.

sourcetype="jboss" TOTAL SEARCH TIME CAREWEB 
| eval rp=EASYDOC_JBOSS_TIME/1000 
| timechart perc99(rp)
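Since the SLA is phrased as "99% of response times under 1 second", the percentile view maps onto it directly: the SLA holds for a day exactly when that day's 99th percentile is at or below 1 second. A sketch building on the query above; p99 and SLA_MET are illustrative names:

sourcetype="jboss" TOTAL SEARCH TIME CAREWEB
| eval rp=EASYDOC_JBOSS_TIME/1000
| timechart span=1d perc99(rp) AS p99
| eval SLA_MET = if(p99 <= 1, "yes", "no")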


kristian_kolb
Ultra Champion

Feel free to mark the question as answered. /k


kunadkat
Explorer

Worked. Thank you very much.


kristian_kolb
Ultra Champion

see update above.

/k


kunadkat
Explorer

Kristian,

It works, but this gives a cumulative answer. Is it possible to get it per day?

Thanks for your help
Kalpesh
