Splunk Search

Getting Average Number of Requests Per Hour

ten_yard_fight
Path Finder

I've read most (if not all) of the questions/answers related to getting an average count of hits per hour. I've experimented with some of the queries posted by fellow Splunkers, and for the most part they've worked for small searches (i.e., charting the two fields Total Count and Average Count). However, I've put together a somewhat lengthy search that doesn't work correctly when it comes to the Average Requests Per Hour (AvgReqPerHour) column. Let me show you what I have here.

... | timechart span=1h count(status_code) AS Events, count(eval(status_code>=200 AND status_code<=206)) AS SuccessfulRequests, count(eval(status_code>=300 AND status_code<=307)) AS RedirectedRequests, count(eval(status_code>=400 AND status_code<=505)) AS FailedRequests, dc(user_agent) AS TotalUsers, sum(file_size) AS TotalData, avg(file_size) AS AvgDataPerHour, avg(Events) AS AvgReqPerHour, avg(seconds) AS AvgResponseTimeSec

So, this search should display some useful columns of web-related stats: it counts all status codes, breaks the requests out into columns, and gives me averages for data transferred per hour and requests per hour.

I hope someone else has done something similar and knows how to properly get the average requests per hour.

1 Solution

martin_mueller
SplunkTrust

per_hour(foo) will sum up the values of foo for the bucket and then scale the sum as if the bucket were one hour long. If your bucket is ten minutes it will multiply by six; if your bucket is one day it will divide by 24.

If every event in your data represents one hit you can do something like this:

... | eval reqs = 1 | timechart span=24h per_hour(reqs) as AvgReqPerHour ...
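
For instance (a rough illustration, not taken from the thread; the 10-minute span and the HourlyRate name are just placeholders), a 10-minute bucket containing 50 events would be reported as 300:

... | eval reqs = 1 | timechart span=10m per_hour(reqs) AS HourlyRate ...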


martin_mueller
SplunkTrust

Great. I've converted the last comment to an answer so you can mark it as accepted.


ten_yard_fight
Path Finder

Awesome!! This is exactly what I was looking for. Your explanation put it into better perspective. Thank you.



collier31200
Explorer

... | eval reqs = 1 | timechart span=24h per_hour(reqs) as AvgReqPerHour ...
This works correctly for me, but the average calculation is wrong when I only have 6 hours of data in a day, for example.
Does anyone have an idea for dividing only by the available hours (6 in my example)?
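
One way this might be approached (a sketch only, not from the thread, and it assumes every event represents one request) is to bucket by hour first and then average over only the hours that actually contain events:

... | bin _time span=1h | stats count AS hourly_count by _time | stats avg(hourly_count) AS AvgReqPerHour

Hours with no events produce no row after the first stats, so they don't get counted in the average.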


esset09
New Member

Wouldn't the simpler example below do the same thing?

... | timechart count span=1h ...

The time span for the entire query could be set using the time picker. It seemed to work for my use case, at least.
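
To collapse that hourly series into a single average, one option (a sketch; the trailing stats step is an assumption, not part of the post above) is:

... | timechart count span=1h | stats avg(count) AS AvgReqPerHour ...

Note that timechart fills empty hours with a zero count, so hours with no traffic still pull this average down.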


ten_yard_fight
Path Finder

The way I interpret this, per_hour() only divides the total by the number of hours in the span. That isn't really showing the actual average, right?


martin_mueller
SplunkTrust

No, avg() cannot predict what you had in mind. Take a look at per_hour().


ten_yard_fight
Path Finder

Yes, but if I increase the span to 1d, shouldn't I then get the average count per hour? Or how does avg() know what time span I'm looking for?
(I meant to change the span to 1d)


martin_mueller
SplunkTrust

Your field Events right at the top of the timechart is your requests per hour, no?
