Splunk Search

Aggregated HTTP status codes per URL per time bin/bucket

brokenboard525
Engager

Hi,

I have the following fields in the logs on my proxy for backend services:

_time -> timestamp
status_code -> HTTP status code
backend_service_url -> the app being proxied

What I want to do is count status codes per minute, per URL, with one column per status code.
Sample output would look like this:

time   backend-service   status 200   status 201   status 202
10:00  app1.com          10                        2
10:01  app1.com                       10
10:01  app2.com          10


The columns should be dynamic, based on the status codes present in the timeframe I am searching.

I found a lot of questions about aggregating all 200s into 2xx, or about total counts by URL, but not this. I'd appreciate any suggestions on how to do it.

Thanks!

1 Solution

ITWhisperer
SplunkTrust
SplunkTrust
| bin _time span=1m
| stats count by _time backend_service_url status_code
| eval {status_code}=count
| fields - status_code count
| stats values(*) as * by _time backend_service_url

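The pivot logic of that search can be sketched outside Splunk. Below is a minimal Python illustration (hypothetical sample events, standard library only) of what `bin _time span=1m`, `stats count by ...`, and the `eval {status_code}=count` pivot do conceptually:

```python
from collections import Counter, defaultdict

# Hypothetical proxy log events: (timestamp, backend_service_url, status_code)
events = [
    ("10:00:05", "app1.com", 200),
    ("10:00:40", "app1.com", 200),
    ("10:00:59", "app1.com", 202),
    ("10:01:10", "app1.com", 201),
    ("10:01:30", "app2.com", 200),
]

# "| bin _time span=1m": truncate each timestamp to the minute, then
# "| stats count by _time backend_service_url status_code": count per tuple
counts = Counter()
for ts, url, code in events:
    minute = ts[:5]  # keep only HH:MM
    counts[(minute, url, code)] += 1

# "| eval {status_code}=count" followed by
# "| stats values(*) as * by _time backend_service_url":
# pivot each status code into its own dynamically named column
rows = defaultdict(dict)
for (minute, url, code), n in counts.items():
    rows[(minute, url)][str(code)] = n

for (minute, url), cols in sorted(rows.items()):
    print(minute, url, cols)
```

In SPL, `eval {status_code}=count` is what creates the dynamic column names: the value of `status_code` (e.g. `200`) becomes a field name, so the final `stats values(*) as *` collapses each (_time, URL) group into one row with one column per status code seen.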


brokenboard525
Engager

Right on, that works!

What would be the best visualization for plotting data from multiple back-end services like this?
It should show the response codes from each back-end service over time.
