Hi All,
I have a few concerns regarding the bucket rolling criteria; my question is mainly about hot buckets.
We have two types of indexes:
1. Default
2. Local (customized) indexes
When I check the log retention of the default index, the hot bucket settings show:
90 days
maxHotBuckets=auto
maxDataSize=auto
and we don't define anything for the local indexes.
While checking, we figured out that for one particular index we can only keep about 55 days of logs in our hot buckets, and the log consumption for this index is roughly 12-14 GB per day.
For another local index we can see more than 104 days of logs.
My concern is: which retention policy is Splunk following to roll the buckets for the local indexes?
1. The 90-day period (which is not what is happening here)
2. Rolling when a hot bucket fills up, on a per-day basis (if Splunk follows this, how much data can an index store per day, how many hot buckets does a local index have, and how much data can each bucket contain?)
I hope I'm not confusing things.
Thanks
Hi @debjit_k ,
Yes, maxTotalDataSizeMB defines the total storage of hot + warm + cold buckets.
When an index exceeds this value, the oldest buckets are automatically discarded regardless of the retention period. For this reason I suggest paying close attention to your capacity plan, to avoid discarding data that is still inside the retention period and that you might need (e.g. for regulatory requirements).
If you need to keep this data beyond the retention period, or once the max size is reached, you should create a script that saves the data to another location offline.
Anyway, it's possible to restore this frozen data by thawing it.
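As an illustration, here is a hedged indexes.conf sketch tying those settings together; the index name, paths, and values are placeholders, not a recommendation. frozenTimePeriodInSecs controls time-based retention, maxTotalDataSizeMB is the size cap (whichever limit is hit first wins), and the optional coldToFrozenDir archives frozen buckets instead of deleting them:

```ini
# indexes.conf -- illustrative values only
[my_local_index]
homePath   = $SPLUNK_DB/my_local_index/db
coldPath   = $SPLUNK_DB/my_local_index/colddb
thawedPath = $SPLUNK_DB/my_local_index/thaweddb
# Time-based retention: 90 days, expressed in seconds
frozenTimePeriodInSecs = 7776000
# Size-based retention: total cap on hot + warm + cold
maxTotalDataSizeMB = 500000
# Optional: archive frozen buckets here instead of deleting them
coldToFrozenDir = /archive/my_local_index/frozen
```

Buckets copied to coldToFrozenDir can later be moved into thawedPath and rebuilt to make them searchable again.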
To know how much data you index daily you can use the Monitoring Console; anyway, the algorithm to calculate storage occupation is the one I described in my previous answer:
daily_index_rate * 0.5 * retention_period_in_days
even if the best way is to use the calculator I already shared.
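A minimal sketch of that sizing formula. The 13 GB/day and 90-day figures are just example values taken from this thread, and the 0.5 factor is the rough on-disk compression assumption from the formula above:

```python
def estimated_storage_gb(daily_index_rate_gb, retention_days, compression_factor=0.5):
    """Rough disk estimate: daily rate * compression factor * retention period."""
    return daily_index_rate_gb * compression_factor * retention_days

# e.g. ~13 GB/day of raw logs kept for 90 days:
print(estimated_storage_gb(13, 90))  # -> 585.0 GB, before any safety factor
```

On top of this estimate, add the safety factor discussed below, since buckets are not deleted the instant the retention period is reached.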
Always apply a safety factor, because a bucket is not deleted immediately: it is only deleted when its latest event exceeds the retention period.
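You can see this on disk: warm and cold bucket directories are named db_<latestEpoch>_<earliestEpoch>_<id>, so each name encodes the bucket's event time span. A small sketch that decodes those names (the bucket names below are made up for illustration):

```python
# Parse warm/cold bucket directory names of the form db_<latestEpoch>_<earliestEpoch>_<id>
# to see each bucket's event time span. Sample names are hypothetical.
from datetime import datetime, timezone

def bucket_span(dirname):
    """Return (earliest, latest) event times encoded in a db_* bucket name."""
    parts = dirname.split("_")
    latest, earliest = int(parts[1]), int(parts[2])
    return (datetime.fromtimestamp(earliest, tz=timezone.utc),
            datetime.fromtimestamp(latest, tz=timezone.utc))

for name in ["db_1700000000_1699000000_12", "db_1698999999_1695000000_11"]:
    earliest, latest = bucket_span(name)
    print(name, "->", earliest.date(), "to", latest.date())
```

Because the whole bucket is frozen only when its latest event ages out, the earliest events in a bucket can stay searchable past the nominal retention period.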
Tell me if I can help you more; otherwise, please accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated 😉
Hi @gcusello
Really appreciate you helping me this much to understand the concept of buckets.
Thanks
Debjit
Hi @gcusello
Sure, sure.
After all, it's a community where we share our problems and knowledge to learn more.
I'm a little worried about the issue I'm having in my environment; if I figure it out somehow, I will post it here.
thanks