Our error logs are indexed by Splunk, and I would like to pull some statistics from them: an aggregated count from each log source, bucketed into 10-minute intervals, over the last 60 minutes.
For instance, something like this:
Logging source | 60 minutes | 50 minutes | 40 minutes | 30 minutes | 20 minutes | 10 minutes
Method 1 | 5 | 6 | 10 | 2 | 4 | 8
Method 2 | 7 | 2 | 0 | 3 | 1 | 4
Method 3 | 51 | 30 | 34 | 62 | 41 | 28
I can't quite get my head around how to formulate this query, though. I tried this:
index=...etc... | bucket _time span=10m | stats count by _time,LogSource | table count, LogSource, _time
This is sort of "transposed" from what I actually want; it's formatted like this instead:
_time | Count | LogSource
2/7/13 9:50:00.000 AM | 4 | Method 1
2/7/13 9:20:00.000 AM | 10 | Method 1
2/7/13 9:20:00.000 AM | 34 | Method 3
2/7/13 9:40:00.000 AM | 2 | Method 2
2/7/13 10:00:00.000 AM | 8 | Method 1
2/7/13 9:40:00.000 AM | 30 | Method 3
How can I turn this into the query I want?
Try this instead of your stats | table:
... | chart count over LogSource by _time
To get the X minutes you may want to eval yourself a new field with the time differences.
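For example, the eval step hinted at above could look something like this (a sketch, not tested against your data; minutes_ago is an illustrative field name, and the 60-minute window is assumed to be set via earliest):

index=...etc... earliest=-60m
| bucket _time span=10m
| eval minutes_ago = round((now() - _time) / 60)
| chart count over LogSource by minutes_ago

This produces one column per bucket, labeled by how many minutes ago the bucket started, instead of by raw timestamp.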
You can transpose such results with "xyseries", but probably you will have to transform the _time column to something ad-hoc. E.g.
index=...etc...
| bucket _time span=10m
| stats count by _time, LogSource
| table count, LogSource, _time
| convert timeformat="%H_%M" ctime(_time) as time
| xyseries LogSource time count