This has come up about one and a half times today.
Basically we want to run a search over, say, the past hour and, for each of the 15-minute periods in that hour, compute a statistic like count or sum(bytes). This is trivial with timechart, except that we want the time intervals to be the columns of the table and the statistic to be the rows; in other words, the transpose of timechart's normal output.
To phrase this in terms of the internal data from the metrics log, say the timechart search would run over the last hour and look like
index=_internal source=*metrics.log group=per_sourcetype_thruput | timechart span=15m count by series
I'm simplifying a bit, but say the table ends up looking like this:
_time                      scheduler  audittrail  splunkd
4/13/10 6:45:00.000 AM     30         27          30
4/13/10 7:00:00.000 AM     29         29          29
4/13/10 7:15:00.000 AM     30         26          30
4/13/10 7:30:00.000 AM     3          3           3
It's close, but again, in this case we actually need these columns to be the rows and these rows to be the columns.
We need this partly because we want to show a column graph where the X-axis is sourcetype and, for each sourcetype, there are 4 (non-stacked) columns, one for each of four consecutive days. We also want this because this tabular format is more familiar to the customer.
I can think of 2 solutions in the abstract, but I've had no success with either:
1) Find some kind of a "transpose" command that can just reflect the data. I can't find one.
2) Use chart and bin myself and literally do what timechart does, but the other way around. This actually works fine, except that the time values in the column headers are epoch-time integers rather than human-readable time strings. Is there a way to do something like the convert command, but on the actual field names instead of field values?
index=_internal source=*metrics.log group=per_sourcetype_thruput | chart count over series by _time
Is there a way to get the desired end result? Using either of these methods, or some other way?
Going with your example from option 2), change your search to this:
index=_internal source=*metrics.log group=per_sourcetype_thruput | bin _time span=1h | convert ctime(_time) timeformat="%m/%d" | chart sum(kb) over series by _time
and you will have human-readable strings in the column headers, like "5/18" and "5/17".
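The same trick should also work for the 15-minute-over-an-hour case from the original question; a sketch (the timeformat here is an assumption, pick whatever string suits the report):

```spl
index=_internal source=*metrics.log group=per_sourcetype_thruput | bin _time span=15m | convert ctime(_time) timeformat="%H:%M" | chart count over series by _time
```

The key point is that bin plus convert rewrites _time to a readable string before chart ever sees it, so the strings become the column names.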
One comment: there is actually a "transpose" command in Splunk. It's a little clunky and you always have to eval and rename things afterwards, but it can be quite useful.
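Picking up that comment, a minimal sketch: pipe the original timechart through transpose. Note that transpose emits generic headers (a "column" field holding the old field names, plus generated row columns), so the rename/eval cleanup mentioned above is still expected afterwards:

```spl
index=_internal source=*metrics.log group=per_sourcetype_thruput | timechart span=15m count by series | transpose
```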