Scale Bucket size for time series data

Explorer

I am trying to generate a chart where the x-axis bucket sizes scale up over time. So the columns would be:
1 hour, 1 day, 1 week, 1 month, 3 months, 6 months, 1 year

Do I need to create each bucket individually, or can I do it with some sort of log scale? The columns don't need to match the date ranges above exactly.

Re: Scale Bucket size for time series data

SplunkTrust

Assuming you want to look back from the end of the time range into the past, here's an idea:

  index=_internal | addinfo | eval diff = info_max_time - _time
  | eval bucket = case(diff < 60, "1m", diff < 3600, "1h", diff < 86400, "1d")
  | chart count by bucket

I'm grabbing the end of the time range to calculate how "old" an event is, then I'm shoving the events into custom buckets and charting by that.
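
One thing worth adding (my note, not part of the answer above): events that match none of the case() branches get a null bucket and silently drop out of the chart. A catch-all true() branch keeps them visible; a minimal sketch:

  index=_internal | addinfo | eval diff = info_max_time - _time
  | eval bucket = case(diff < 60, "1m", diff < 3600, "1h", diff < 86400, "1d",
      true(), "older")
  | chart count by bucket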

Re: Scale Bucket size for time series data

Splunk Employee

You can also (if you want) try something like:

  ... | addinfo | eval diff = info_max_time - _time | bucket span=1log10 diff | chart count by diff

You can experiment with different values for the span, e.g., 1log2 or 1.2log10. The resulting buckets are ranges of time differences in seconds. Raw second counts can be awkward to read, though, so if you want friendly column labels, the case() approach above is probably the better fit.
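
If you want the log-spaced buckets but more readable labels, one option is to format each bucket's lower bound as a duration. This is my sketch, not part of the answer above; it assumes bucket renders each numeric range as a string like "100-1000", which split() can then take apart:

  ... | addinfo | eval diff = info_max_time - _time
  | bucket span=1log10 diff
  | eval lower = tonumber(mvindex(split(diff, "-"), 0))
  | eval label = tostring(lower, "duration")
  | chart count by label

Note that chart will still sort the label columns lexicographically, which is another point in favor of hand-built case() buckets when presentation matters.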

Re: Scale Bucket size for time series data

Explorer

I was able to use this example with a few changes:

  index=_internal | addinfo | eval diff = info_max_time - _time
  | eval bucket = case(diff <= 86400, "1 day",
      86400 < diff AND diff <= 172800, "2 days",
      172800 < diff AND diff <= 604800, "1 week",
      604800 < diff AND diff <= 1209600, "2 weeks",
      1209600 < diff AND diff <= 2628000, "1 month")
  | chart count by bucket
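
A caveat on this version (my note, not the poster's): chart sorts columns lexicographically by label, so "1 month" will land between "1 day" and "1 week". Prefixing each label with a sort key works around that, and since case() evaluates its branches in order, the lower-bound checks can be dropped as well. A sketch:

  index=_internal | addinfo | eval diff = info_max_time - _time
  | eval bucket = case(diff <= 86400, "1. 1 day",
      diff <= 172800, "2. 2 days",
      diff <= 604800, "3. 1 week",
      diff <= 1209600, "4. 2 weeks",
      diff <= 2628000, "5. 1 month")
  | chart count by bucket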

Re: Scale Bucket size for time series data

SplunkTrust

Great... I'll mark this as solved?
