So, I got the 150TB of cold storage, but it is mounted at /mnt/splunk1/cold and /mnt/splunk2/cold (one mount per indexer). I figured the non-standard paths might cause issues with the indexers, so I created a symlink at
/opt/splunk/var/lib/splunk/cold on each indexer so that Splunk writes to the same path on both.
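For anyone reproducing this, a rough sketch of that symlink step (paths are the ones from this post; a scratch directory stands in for the filesystem root here so the sketch is safe to run anywhere, but on a real indexer you would use the absolute paths, as the splunk user, with Splunk stopped):

```shell
# Sketch of the symlink setup described above. On the real indexers the
# target is /mnt/splunk1/cold (or /mnt/splunk2/cold on the second box)
# and the link is /opt/splunk/var/lib/splunk/cold.
BASE=$(mktemp -d)
mkdir -p "$BASE/mnt/splunk1/cold"
mkdir -p "$BASE/opt/splunk/var/lib/splunk"
ln -s "$BASE/mnt/splunk1/cold" "$BASE/opt/splunk/var/lib/splunk/cold"
# The cold path now resolves to the mounted storage:
readlink "$BASE/opt/splunk/var/lib/splunk/cold"
```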
I am now thinking about changing the indexes.conf and adding to the volume stanza:
# One Volume for Cold
[volume:cold]
path = /opt/splunk/var/lib/splunk/cold
# 150000GB (150TB)
maxVolumeDataSizeMB = 150000000
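As a sanity check on that number: maxVolumeDataSizeMB is specified in MB, and the comment uses decimal (1000-based) units, so 150TB works out as:

```shell
# 150 TB -> GB -> MB using decimal (1000-based) units, as in the comment above
TB=150
GB=$((TB * 1000))   # 150000 GB
MB=$((GB * 1000))   # 150000000 MB
echo "$MB"
```

(With 1024-based units it would be 150 * 1024 * 1024 = 157286400 instead; either way, it's common to size the volume slightly below the real capacity so the filesystem never fills completely.)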
Then changing the cold locations from:
coldPath = volume:primary/defaultdb/colddb
to:
coldPath = volume:cold/defaultdb/colddb
The ES definitions are:
coldPath = $SPLUNK_DB/audit_summarydb/colddb
I would like to change those too, similar to the above:
coldPath = volume:cold/audit_summarydb/colddb
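Putting the pieces together, the cold-related changes above would look something like this in indexes.conf. This is a sketch, not a complete config: the [main] and [audit_summary] stanza names are illustrative, so use whichever index stanzas actually own defaultdb and audit_summarydb in your deployment.

```ini
# One volume backed by the 150TB cold storage (symlinked under $SPLUNK_HOME)
[volume:cold]
path = /opt/splunk/var/lib/splunk/cold
# 150000GB (150TB)
maxVolumeDataSizeMB = 150000000

# Default index: cold buckets land on the cold volume
[main]
coldPath = volume:cold/defaultdb/colddb

# ES index that previously used $SPLUNK_DB (stanza name illustrative)
[audit_summary]
coldPath = volume:cold/audit_summarydb/colddb
```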
maxWarmDBCount = 50
maxHotSpanSecs = 2592000
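For reference on that second value: 2592000 seconds is 30 days, a third of the shipped default of 7776000 (90 days in recent Splunk versions), so hot buckets roll to warm after at most a month:

```shell
# Convert maxHotSpanSecs values to days (86400 seconds per day)
echo $((2592000 / 86400))   # the value above: 30 days
echo $((7776000 / 86400))   # the shipped default: 90 days
```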
Anything else I should or shouldn't have done?
Well, the above worked for me. In our case we have 675GB of SSD in RAID 1 on each of two indexers, and they were full with the default settings. I finally got the 150TB of spinning disk mounted as cold, but nothing was rolling over to it. So I ran a search over my data to see how far back it went. Not sure this was scientific in any way, but I decided to cut the default settings above to roughly a third.
The end result was that the hot drives came down to 60% and 72% utilization, so we will go forward with this config until I get more hot storage.