What happens if the maximum size of Splunk index is exceeded

tl;dr: It should continue writing, but will drop the oldest data if configured correctly.
In Splunk, an index that is not properly configured can grow past its intended size, which can result in data being deleted or indexing being suspended. The critical settings to monitor are maxTotalDataSizeMB, frozenTimePeriodInSecs, homePath.maxDataSizeMB, and coldPath.maxDataSizeMB.
maxTotalDataSizeMB: This setting specifies the maximum overall size (in MB) allowed for an index. When this limit is reached, Splunk rolls the oldest buckets to the "frozen" state, which by default means deletion unless coldToFrozenScript or coldToFrozenDir is configured.
frozenTimePeriodInSecs: This defines the timeframe data can remain in the index before being frozen. Once the time elapses, the data is typically deleted unless alternative archiving options are specified.
homePath.maxDataSizeMB: This setting controls the maximum size of the home path, encompassing hot and warm buckets. If this threshold is surpassed, older buckets move to cold storage.
coldPath.maxDataSizeMB: This defines the maximum size for cold storage. Exceeding this limit results in the freezing of older buckets.
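To see how these settings fit together, here is a minimal indexes.conf sketch; the index name and values are purely illustrative, not recommendations:
[example_index]
# roll the oldest buckets to frozen once the index reaches ~500 GB
maxTotalDataSizeMB = 512000
# freeze (and, by default, delete) data older than 90 days (90 * 86400 seconds)
frozenTimePeriodInSecs = 7776000
# cap hot/warm storage at 100 GB; older buckets roll to cold beyond this
homePath.maxDataSizeMB = 102400
# cap cold storage at 400 GB; older buckets roll to frozen beyond this
coldPath.maxDataSizeMB = 409600
# optional: archive instead of delete when data is frozen
# coldToFrozenDir = /archive/example_index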
If an index is frequently reaching its limits, consider evaluating your data volumes, reviewing these configurations, and ensuring that your Splunk setup can handle projected growth, to avoid unintended data loss or performance degradation.
Splunk will start rolling data to frozen when either the frozenTimePeriodInSecs or maxTotalDataSizeMB limit is reached, whichever comes first. This could mean that even if you expect 30 days of data, if there is only enough disk space for 10 days then data will start rolling to frozen (which may mean deletion) sooner than expected.
If your free disk space drops below the value set in server.conf/[diskUsage]/minFreeSpace (defaults to 5000 MB), Splunk will stop letting you execute searches.
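For reference, that threshold lives in server.conf; a sketch showing the default value:
[diskUsage]
# searches are blocked when free disk space drops below this value (in MB)
minFreeSpace = 5000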
Please let me know how you get on and consider upvoting/karma this answer if it has helped.
Regards
Will

If it is just a single index that you want to cap at 80% of your available storage, set maxTotalDataSizeMB for that specific index in indexes.conf to 0.8 * <AvailableSpaceInMB>.
However, be aware that this value applies to a single index only; if you have multiple indexes and set it on each of them, every index could individually grow to 80% of your storage.
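As a rough sketch of the single-index approach (the index name is hypothetical, and 1,000,000 MB of available space is assumed):
[my_index]
# 0.8 * 1000000 MB of available space
maxTotalDataSizeMB = 800000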
Instead you can configure a Volume for all of your indexes to be stored in. See the indexes.conf docs for more examples but as a brief overview:
[volume:yourVolume]
path = /mnt/big_disk2
# set this to 0.8 * <SizeOfDiskInMB>
maxVolumeDataSizeMB = 1000000
# index definitions
[idx1]
homePath = volume:yourVolume/idx1
coldPath = volume:yourVolume/idx1
# thawedPath must be specified, and cannot use volume: syntax
# choose a location convenient for reconstitution-from-archive goals
# For many sites, this may never be used.
thawedPath = $SPLUNK_DB/idx1/thaweddb
It is important to remember to set the home/cold path to use your volume!
You can specify multiple volumes for different indexes or hot/cold data depending on your storage configuration and requirements.
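For example, a sketch with separate hot and cold volumes (the volume names, paths, and sizes are hypothetical):
[volume:hot_volume]
path = /mnt/fast_ssd
maxVolumeDataSizeMB = 500000
[volume:cold_volume]
path = /mnt/big_disk
maxVolumeDataSizeMB = 2000000
[idx1]
homePath = volume:hot_volume/idx1
coldPath = volume:cold_volume/idx1
thawedPath = $SPLUNK_DB/idx1/thaweddb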
Please let me know how you get on and consider upvoting/karma this answer if it has helped.
Regards
Will
To configure Splunk to limit an index to 80% of its maximum size and prevent further data from being written, first define the maximum size for your index in the indexes.conf file using the maxTotalDataSizeMB attribute. For example, if you want the maximum size to be 100 GB:
[your_index_name]
maxTotalDataSizeMB = 102400
To enforce the 80% limit, you can use the maxVolumeDataSizeMB attribute within a volume configuration. This attribute specifies the maximum size for the volume; set it to 80% of the total size. For example, if the total size is 100 GB, set the volume size to 80 GB:
[volume:your_volume_name]
path = /path/to/your/volume
maxVolumeDataSizeMB = 81920
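Note that the volume limit only applies to indexes whose paths reference the volume; a brief sketch (the index name is hypothetical):
[your_index_name]
homePath = volume:your_volume_name/your_index_name
coldPath = volume:your_volume_name/your_index_name
# thawedPath cannot use volume: syntax
thawedPath = $SPLUNK_DB/your_index_name/thaweddb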
Configure maximum index size - Splunk Documentation
