Monitoring Splunk

Splunk Indexer error


We received the error below in splunkd.log on our indexer server. We are running a clustered environment with 6 indexers, and the indexers keep going up and down.

WARN IndexerService - Indexer was started dirty: splunkd startup may take longer than usual; searches may not be accurate until background fsck completes.

11-09-2020 00:10:41.703 +0000 WARN IndexConfig - Max bucket size is larger than destination path size limit. Please check your index configuration. idx=some_index; bucket size in (from maxDataSize) 750 MB, homePath.maxDataSizeMB=256, coldPath.maxDataSizeMB=0



Is some_index a new index that was recently set up? If so, did the errors start happening after some_index was introduced?

That IndexConfig warning means that Splunk looked over your indexes.conf and found that some_index is misconfigured. The warning won't stop Splunk from starting, but the index's buckets can behave badly over time. Looking at your output, maxDataSize is 750MB (the maximum size a hot bucket reaches before it rolls to warm), while homePath.maxDataSizeMB is 256MB (the maximum combined size of hot and warm buckets) — so a single hot bucket could exceed the entire home path limit, which is exactly what the warning is flagging. Note that coldPath.maxDataSizeMB=0 normally means "no limit" rather than "0MB of cold storage," so the hot/home mismatch is the part to fix.
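To make that concrete, here's a sketch of what a consistent stanza for some_index might look like. The paths and sizes are hypothetical examples, not recommendations — adjust them to your own retention and storage limits:

```ini
# Hypothetical indexes.conf stanza for some_index (example values only)
[some_index]
homePath   = $SPLUNK_DB/some_index/db
coldPath   = $SPLUNK_DB/some_index/colddb
thawedPath = $SPLUNK_DB/some_index/thaweddb

# "auto" = 750MB hot buckets (the size your warning reports)
maxDataSize = auto

# Must be comfortably larger than maxDataSize, otherwise a single
# hot bucket can exceed the whole hot+warm limit (your 256MB problem)
homePath.maxDataSizeMB = 10000

# 0 means "no limit"; set an explicit cap only if cold storage is constrained
coldPath.maxDataSizeMB = 0
```

After editing, you can verify what Splunk actually resolves for the index with `splunk btool indexes list some_index --debug`, which also shows which file each setting comes from.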

Your IndexerService error message may be related to some_index, as discussed in this Answers post. I'd try fixing some_index's config and restarting to see whether the error goes away.


Hope this helped!