If an index was kept small by a low default setting, how can I have Splunk re-index a large pool of data once I raise maxTotalDataSizeMB in indexes.conf? The only ideas I've come up with so far are using the delete command, blowing away the whole index so it can be re-parsed, or just turning the inputs for that data back on and hoping that Splunk does the right thing.
If a low maxTotalDataSizeMB caused your buckets to be frozen (which effectively means your older buckets were deleted, unless you set up an archive script with the coldToFrozenScript option), then you really don't have much of a choice. Your older buckets were already removed, so your only option is to re-index your data if you want it back.
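For future reference, raising the size limit and hooking in an archive script can prevent this kind of data loss. A rough sketch of the relevant indexes.conf settings is below; the index name and script path are hypothetical, so adjust them to your environment:

```ini
# indexes.conf (sketch -- index name and paths are examples, not defaults)
[my_index]
# Raise the total size cap so buckets are not frozen prematurely
maxTotalDataSizeMB = 500000
# Optional: archive buckets instead of deleting them when they freeze
coldToFrozenScript = "/opt/splunk/bin/myColdToFrozen.sh"
```

With coldToFrozenScript set, frozen buckets are handed to your script (which can copy them somewhere safe) instead of simply being removed.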
However, you will now run into a secondary issue. Splunk keeps track of which log files it has already indexed, so simply re-enabling (turning back "on") your inputs will not cause Splunk to index the same events again. (This feature is what keeps rotated log files from being re-indexed, which is great, but it works against you in this case.) You will have to somehow trick Splunk into thinking that your files have changed. (You may then end up needing the "delete" command to carefully remove only the duplicate events.) You can trick Splunk into thinking that your log files have changed by setting a different value for crcSalt on your input stanzas, but be aware that this can lead to other problems.
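To illustrate the crcSalt trick, here is a sketch of an inputs.conf monitor stanza; the monitored path and the salt string are examples, not anything Splunk requires:

```ini
# inputs.conf (sketch -- path and salt value are hypothetical examples)
[monitor:///var/log/myapp]
index = my_index
# Changing crcSalt alters the checksum Splunk computes for each file,
# so previously seen files look "new" and get indexed again.
crcSalt = REINDEX-2013-01
```

Because the salt changes the checksum for every file matched by the stanza, any file still on disk under that path will be re-read, including ones you did not intend to duplicate, so scope the stanza carefully and remove or revert the salt once the re-index is done.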
If you want to reload absolutely everything within your Splunk environment, you can use the "splunk clean all" command, but be sure to read up on it before you try it. It removes all of your Splunk data, which will allow you to re-index everything, since it forces Splunk to forget which files it previously indexed. This could get very ugly.