
coldPath.maxDataSizeMB configuration

rayar
Contributor

We have 6 indexers, each with 9 TB of storage.

We also have ~100 indexes, each with a different retention time.

We are indexing ~2 TB of data daily.

All our indexers have reached ~99% filesystem usage, and we can't add more storage.

I would like to set coldPath.maxDataSizeMB, since it looks like we have an issue with deleting cold buckets.

Does it make sense to set coldPath.maxDataSizeMB = 5242880 (5 TB)?
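For reference, 5 TB = 5 × 1024 × 1024 MB = 5,242,880 MB, so the number itself is right. As a sketch, this is roughly what I have in mind (the index name and paths are placeholders, not our real ones):

    # indexes.conf -- example index; name and paths are placeholders
    [example_index]
    homePath   = $SPLUNK_DB/example_index/db
    coldPath   = $SPLUNK_DB/example_index/colddb
    thawedPath = $SPLUNK_DB/example_index/thaweddb
    # Cap this index's cold storage at 5 TB; once the cap is reached,
    # the oldest cold buckets roll to frozen (deleted unless
    # coldToFrozenDir or coldToFrozenScript is set).
    coldPath.maxDataSizeMB = 5242880

One thing I'm aware of: maxTotalDataSizeMB (default 500000 MB per index) applies as well, so whichever limit is hit first triggers the roll to frozen.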

How can I calculate the right value for each index?

I have this tool, https://splunk-sizing.appspot.com/, but I can't use it since I have a different retention time for each index.
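As a starting point, I assume a search like this (a sketch; state and sizeOnDiskMB are fields dbinspect reports per bucket) would show how much each index currently keeps in cold storage:

    | dbinspect index=*
    | search state=cold
    | stats sum(sizeOnDiskMB) AS coldMB count AS coldBuckets BY index
    | sort - coldMB

From there, I'd guess the per-index target is roughly: daily ingest for that index × retention in days × the on-disk compression ratio (often around 50% of raw volume), minus what stays in homePath.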

 


gcusello
SplunkTrust

Hi @rayar,

honestly, I think this question isn't well suited to this forum;

I think the correct approach is to engage a Splunk consultant or Splunk Professional Services (PS) to analyze your situation in the field.

Here in the Community we can give you some ideas, but in your position I'd feel safer with a detailed study of the situation, because I imagine your data are important to your company!

Anyway, since you have different retention times across your indexes (which I assume you must respect!), perhaps the correct approach is to analyze each index and move some of them to another storage tier, to free up the necessary space.
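If you do get a second, larger mount, pointing cold buckets at it could look something like this in indexes.conf (only a sketch: the volume name, mount point, and sizes are examples, not a recommendation):

    # indexes.conf -- example volume on a separate, larger mount
    [volume:cold_volume]
    path = /mnt/cold_storage
    # cap everything stored on this volume at 10 TB (10 * 1024 * 1024 MB)
    maxVolumeDataSizeMB = 10485760

    [example_index]
    # send this index's cold buckets to the new volume
    coldPath = volume:cold_volume/example_index/colddb

Remember that the existing cold buckets would also have to be moved to the new location while the indexer is stopped.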

Then, since you presumably have an indexer cluster, you could save more space by reducing (if possible) the Search Factor or the Replication Factor.
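For reference, both factors are set on the cluster manager, in something like this server.conf stanza (example values only; on older versions the mode is "master" instead of "manager"):

    # server.conf on the cluster manager -- example values only
    [clustering]
    mode = manager
    replication_factor = 2
    search_factor = 1

Lowering the Search Factor removes extra searchable (tsidx) copies, which is usually the bigger disk saving; lowering the Replication Factor removes raw-data copies but weakens your protection against losing an indexer.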

As I said, it's too large a problem to analyze in a few words!

Ciao and good luck.

Giuseppe


rayar
Contributor

Thanks a lot for your input. I will accept this as the solution and contact PS.


gcusello
SplunkTrust

Hi @rayar,

As I said, if your data are important (and I'm sure they are), this is the correct approach, given both the volume and the value of the data!

Ciao and good luck.

Giuseppe

P.S.: Karma points are appreciated!
