Hello,
I have an index with this configuration:
[ssys_emea_pj]
homePath = volume:hotwarm/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
coldToFrozenDir = /splunkData/frozen/$_index_name
# (c) igor@emet 2019-02-18
# Don't forget to update these values when you resize the volume
homePath.maxDataSizeMB = 40000
coldPath.maxDataSizeMB = 10000
frozenTimePeriodInSecs = 31104000
maxDataSize = auto_high_volume
From btool:
maxTotalDataSizeMB = 500000
I'm getting this error:
Max bucket size is larger than destination path size limit
I don't know if this is the reason, but not all of my data has been indexed from the S3 bucket; from the same folder, not all of the subfolders are indexed.
You are getting the max bucket size error because you have configured maxDataSize = auto_high_volume. On a 64-bit OS, auto_high_volume means buckets of up to 10 GB (10240 MB), and you have coldPath.maxDataSizeMB = 10000, which is less than that.
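For example, one way to make the two settings consistent is to give the cold path headroom for at least one full bucket. This is a sketch, not a tested configuration; the 15000 value is just an illustrative choice:

```ini
[ssys_emea_pj]
# auto_high_volume buckets can grow to 10 GB (10240 MB) on 64-bit systems,
# so the cold path cap must be at least one full bucket
maxDataSize = auto_high_volume
coldPath.maxDataSizeMB = 15000
```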
So what should the right value be?
And could this error explain why not all the data was indexed from S3?
I don't know how you are ingesting the data, and I didn't understand the question regarding S3.
My data source is an S3 bucket.
I have a problem where not all of the data there is indexed; from the same folder, some of the subfolders are not indexed.
So I wondered whether this error is the reason.
By the way, my system is Linux. Are the values you mentioned the same there?
I'd suggest lowering maxDataSize in indexes.conf for the index ssys_emea_pj.
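The lowered setting could look like this (a sketch; per Splunk's defaults, maxDataSize = auto corresponds to 750 MB buckets, well under the existing cold path cap):

```ini
[ssys_emea_pj]
# "auto" caps buckets at 750 MB, comfortably below coldPath.maxDataSizeMB = 10000
maxDataSize = auto
coldPath.maxDataSizeMB = 10000
</imports>
```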
coldPath.maxDataSizeMB was 10000 when I saw this error.