Deployment Architecture

How to determine the age of events in frozen buckets?

wmedeiros
New Member

Hi!
I configured my deployment to store frozen buckets, but now I have only 15% free space left on my partition.
I need to delete some buckets. How can I determine the age of the events in my frozen buckets so I can resolve this problem?

niketn
Legend

@wmedeiros, you can use the dbinspect command to analyze your buckets by volume and age:

https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Dbinspect
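
For example, a minimal sketch along these lines lists each bucket's time span in readable form (startEpoch, endEpoch, state, and sizeOnDiskMB are standard dbinspect output fields; index=_internal is just a placeholder, swap in your own index). Note that dbinspect only reports buckets the indexer still manages, so buckets already rolled to frozen won't show up:

| dbinspect index=_internal
| eval oldestDate=strftime(startEpoch, "%Y-%m-%d %H:%M:%S"), newestDate=strftime(endEpoch, "%Y-%m-%d %H:%M:%S")
| table bucketId state oldestDate newestDate sizeOnDiskMB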

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

lycollicott
Motivator
db_<newest_time>_<oldest_time>_<localid>_<guid>

The times are in UTC epoch.

http://docs.splunk.com/Documentation/Splunk/7.0.0/Indexer/HowSplunkstoresindexes
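
To illustrate, here is a quick sketch that parses a bucket directory name and computes its span in days (the bucket name below is made up, and the rex pattern assumes the non-clustered db_<newest_time>_<oldest_time>_<localid> form):

| makeresults
| eval bucket="db_1514764800_1483228800_7"
| rex field=bucket "^db_(?<newest>\d+)_(?<oldest>\d+)_(?<localid>\d+)"
| eval spanDays=round((tonumber(newest)-tonumber(oldest))/86400, 1)
| table bucket oldest newest spanDays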

wmedeiros
New Member

Thanks.
I have this bucket: "db_1467575622_1467543210_820". I understand that newest_time=1467575622 and oldest_time=1467543210.
These times are in seconds, but how do I convert them to a date?

1 day has 86400 seconds (24*60*60).
1 year has 31536000 seconds (86400*365).
1467575622 seconds is about 46.5 years (1467575622 / 31536000).

How do I determine the age of these events?

harsmarvania57
Ultra Champion

Those are epoch timestamps, i.e. seconds elapsed since 1970-01-01 00:00:00 UTC, which is why roughly 46.5 years lands in mid-2016. You can use https://www.epochconverter.com/ to convert epoch time to a human-readable format.
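
You can also do the conversion in Splunk itself with strftime. A quick sketch using the epoch values from the bucket in this thread (both fall on 3 July 2016 UTC; note that strftime renders in the search head's timezone unless you adjust for it):

| makeresults
| eval oldest_time=1467543210, newest_time=1467575622
| eval oldestDate=strftime(oldest_time, "%Y-%m-%d %H:%M:%S"), newestDate=strftime(newest_time, "%Y-%m-%d %H:%M:%S")
| table oldestDate newestDate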
