indexes.conf sanity question.

JDukeSplunk
Builder

I wanted to ask here before making this change, for just another set of eyes.

Issue: we have /hot and /cold volumes with equal amounts of storage, and no difference in storage speed between them. Currently data rolls to cold at 90 days, so cold is filling up while hot stays only about 20% full.

I'd like to set the following to try to keep data in hot/warm for almost half of our global 13-month retention period. Do these settings make sense?

[default]
#######retentions and hotwarm limits#######
repFactor = auto

#To balance disk space, keep more warm buckets than the default of 300.
maxWarmDBCount = 3600

#Idle hot buckets roll to warm if no data is written to them in a day.
maxHotIdleSecs = 86400

#Upper bound of timespan of hot/warm buckets, in seconds.
maxHotSpanSecs = 15778476

#After 13 months data will roll to the bit bucket (be deleted) unless a frozen directory is specified in the index's stanza.
frozenTimePeriodInSecs = 34136000

#Data coming in for an unconfigured index will land in the sandbox index.
lastChanceIndex = sandbox
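
For reference, the math behind the two big constants: 15,778,476 s / 86,400 s/day ≈ 182.6 days, which is half of a 365.2425-day year and just under half of our ~395-day (13-month) retention; 34,136,000 s / 86,400 s/day ≈ 395 days, i.e. roughly 13 months.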

Thanks.


woodcock
Esteemed Legend

You are doing it all wrong. Forget the time-based settings and configure volume-based settings instead; that way you can let hot/warm fill based on size. Better yet, do that AND create a single logical volume that contains both your current hot/warm and your cold, then don't configure cold at all. If it is the same storage type, it should be the same volume, and there is no need to complicate things by having a cold tier at all.
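
Roughly like this (the volume name, mount path, and size are placeholders, not your actual values):

[volume:primary]
#One logical volume spanning what is now /hotwarm and /cold.
path = /splunkdata
maxVolumeDataSizeMB = 20000000

[default]
#Home and cold live on the same volume and fill it purely by size.
homePath = volume:primary/$_index_name/db
coldPath = volume:primary/$_index_name/colddb
#thawedPath cannot reference a volume.
thawedPath = $SPLUNK_DB/$_index_name/thaweddb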


JDukeSplunk
Builder

I hear what you are saying, and it makes sense. However...

We do not control the disk settings; they are what they are: /hotwarm is N TB and /cold is N TB.

You're talking about setting homePath.maxDataSizeMB, right? So what happens if I set that to, say, 500 GB on index1, but over the course of a year index1 never exceeds that limit? It will never roll to cold, but it will get rolled to frozen after 13 months. So hotwarm fills up while the storage in cold never gets utilized by index1.
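
To make that concrete (the index name and cap are made up):

[index1]
#~500 GB cap on hot/warm. If index1 never ingests that much, nothing
#ever rolls to cold; buckets go straight from warm to frozen at 13
#months and the /cold storage sits unused for this index.
homePath.maxDataSizeMB = 512000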

Or, if I abandon all time-based settings as you suggest and just set wholesale caps at the volume level with maxVolumeDataSizeMB for hotwarm and cold, and at the index level with homePath.maxDataSizeMB and coldPath.maxDataSizeMB, then low-volume indexes keep more than 13 months of data because they never reach the cap, which we don't want.

Also, what happens when someone gets goofy and turns on debug log levels on 3000 hosts, filling the max sizes overnight? All of my historical data is now gone and our 13-month retention requirement is shot. I'd rather have the disk fill and Splunk fall over, not logging the new debug crap, than push my good data into a bit bucket or frozen.
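
To illustrate that worry with a purely size-based setup (names and sizes made up):

[volume:hotwarm]
path = /hotwarm
maxVolumeDataSizeMB = 10000000

[volume:cold]
path = /cold
maxVolumeDataSizeMB = 10000000

[quiet_index]
homePath = volume:hotwarm/quiet_index/db
coldPath = volume:cold/quiet_index/colddb
thawedPath = $SPLUNK_DB/quiet_index/thaweddb
#With no frozenTimePeriodInSecs, this low-volume index never hits a
#cap and keeps data well past 13 months. Meanwhile a debug flood on
#other indexes can push a volume past maxVolumeDataSizeMB, and Splunk
#freezes the oldest buckets on that volume to make room, taking good
#historical data with it.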
