
Graphing a logs-per-second value

acidkewpie
Path Finder

I'm using this query to graph how many web requests are being logged per second:

index="bigip_ltm" (event=HTTP_REQUEST OR event=HTTP_RESPONSE) client_ip=1.2.3.4 | timechart count(event) by event

But, as in many questions here, the count() is graphed over a time interval derived from the search period. I've tried many permutations of span=1s, bucket commands, etc., but I can't work out how to plot an average per-second value over whatever period of time is represented on the graph.

In this question http://splunk-base.splunk.com/answers/46978/average-field-value-per-second the "per_second" data is already in the logs, but I want a per-second rate of the count of the logs themselves, so one step further removed.
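
For example, one of the span=1s variations (a rough sketch of the sort of thing I tried, not the exact query) looks like this:

index="bigip_ltm" (event=HTTP_REQUEST OR event=HTTP_RESPONSE) client_ip=1.2.3.4 | timechart span=1s count by event

That gives one data point per second, which is far too granular over a longer search window; what I'm after is the average per-second rate within whatever interval the chart ends up using.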

Solution

acidkewpie
Path Finder

Yeah, that's pretty useful. I thought there needed to be another aggregation stage but couldn't work out what it might be.

I've now got this:

index="bigip_ltm" (event=HTTP_REQUEST OR event=HTTP_RESPONSE) client_ip=1.2.3.4 | timechart count by event | timechart per_second(HTTP_REQUEST) per_second(HTTP_RESPONSE)

I don't like having the field values end up as static field names, but I presume that's pretty much unavoidable? Is there no way to graph all the "event" values implicitly?
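
One workaround I can think of (just a sketch, untested, and it assumes a fixed one-minute span so the divide-by-60 lines up) is to compute the per-second rate with eval before charting, which keeps the "by event" split instead of hard-coded field names:

index="bigip_ltm" (event=HTTP_REQUEST OR event=HTTP_RESPONSE) client_ip=1.2.3.4 | bucket _time span=1m | stats count by _time, event | eval per_second=count/60 | timechart span=1m max(per_second) by event

The downside is that the span is baked into the eval, so it has to change whenever the span does.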

Either way, that's got me what I asked for. Thanks!
