We have 6 indexers, each with 9 TB of storage.
We also have ~100 indexes with different retention times.
We are indexing ~2 TB of data daily.
All our indexers have reached ~99% filesystem usage and we can't add more storage.
I would like to set coldPath.maxDataSizeMB, since it looks like we have some issue with deleting cold buckets.
Does it make sense to set coldPath.maxDataSizeMB = 5242880 (5 TB)?
How can I calculate the values?
I have this tool, https://splunk-sizing.appspot.com/, but I can't use it since I have a different retention time for each index.
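For reference, here is the back-of-the-envelope sizing I've been trying (a sketch; the ~50% compression factor is Splunk's usual rule of thumb, and all numbers are illustrative):

per-index disk ≈ daily ingest × ~0.5 (compression) × retention days × replication factor
per-indexer cap ≈ per-index disk / number of indexers

Example: an index receiving 100 GB/day with 90-day retention and RF = 2:
100 GB × 0.5 × 90 × 2 = 9000 GB total, so 9000 / 6 = 1500 GB (1536000 MB) per indexer.

A corresponding indexes.conf stanza might look like this (index name and the split between home and cold paths are hypothetical; note that maxTotalDataSizeMB and coldPath.maxDataSizeMB apply per index, per indexer, so a 5 TB cold cap on a 9 TB indexer only takes effect if a single index can actually grow that large):

[my_index]
# 90 days = 90 x 86400 seconds
frozenTimePeriodInSecs = 7776000
# total cap for this index on each indexer (1500 GB)
maxTotalDataSizeMB = 1536000
# caps for the hot/warm and cold portions; when the cold cap is
# exceeded, the oldest cold buckets are frozen (deleted by default)
homePath.maxDataSizeMB = 300000
coldPath.maxDataSizeMB = 1236000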
hi @rayar ,
honestly, I don't think this question is a good fit for this context;
I think the correct approach is to engage a Splunk Consultant or Splunk PS to analyze your situation on site.
In the Community we can give you some ideas, but in your position I'd feel safer with a detailed study analyzing the situation, because I imagine your data are important to your company!
Anyway, since you have different retention times across your indexes (which I assume you must respect!), maybe the correct approach is to analyze each index and move some of them to another storage location, to get the necessary free space; see the search below.
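For example, a search like this (a sketch; run it from a search head that can reach all the indexers) shows how much disk each index uses per bucket state, so you can see which indexes to target:

| dbinspect index=*
| stats sum(sizeOnDiskMB) AS totalMB BY index, state
| sort - totalMB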
Then, since you surely have a cluster, you could save more space by reducing (if possible) the Search Factor or the Replication Factor, e.g. with something like the snippet below.
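For example, on the cluster manager (values illustrative; lowering these also lowers your resilience and search availability, so weigh it carefully):

[clustering]
replication_factor = 2
search_factor = 2

After changing these you have to restart the cluster manager, and existing excess bucket copies may need to be removed explicitly (the manager's Bucket Status dashboard has a remove-excess-buckets action).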
As I said, it's too large a problem to analyze in a few words!
Ciao and good luck.
Giuseppe
Thanks a lot for your input, I will accept this and contact PS.
Hi @rayar ,
as I said, if your data are important (and I'm sure they are), this is the correct approach, because there is a lot of it and it matters!
Ciao and Good luck.
Giuseppe
P.S.: Karma points are appreciated!