I'm looking to set up a stand-alone test Splunk instance and want to limit the disk size of the instance to 300GB.
Is this possible to do within the config files? Or do I need to install it on a separate partition that has 300GB and just let it run?
Freeze data when an index grows too large: Set maxTotalDataSizeMB
You can use the size of an index to determine when data gets frozen and removed from the index. If an index grows larger than its maximum specified size, the oldest data is rolled to the frozen state.
The default maximum size for an index is 500,000MB. To change the maximum size, edit the maxTotalDataSizeMB attribute in indexes.conf. For example, to specify the maximum size as 250,000MB:
[main]
maxTotalDataSizeMB = 250000
Specify the size in megabytes.
Restart the indexer for the new setting to take effect. Depending on how much data there is to process, it can take some time for the indexer to begin to move buckets out of the index to conform to the new policy. You might see high CPU usage during this time.
Thanks, but that setting is per index; I would like the whole instance not to exceed 300GB.
For instance, I could have 10 indexes, but once their combined size reaches 300GB, Splunk should stop indexing.
Actually, not setting the index sizes smaller than the total disk space might inadvertently do what you want. If you set a max size on an index, it will roll out the oldest events when that limit is reached. If you run out of disk space instead, Splunk will raise a system alarm and stop indexing. Example: "Skipped indexing of internal audit events; will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may have caused the indexer to block."
Of course, this is a symptom, not a solution to your request.
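A cleaner approach on a stand-alone test instance is to cap each index with maxTotalDataSizeMB and keep the sum of those caps under 300GB (300,000 MB), so the oldest buckets roll to frozen before the disk ever fills. A quick sanity check of that budget could be scripted; here is a minimal sketch in Python, where the index names and sizes are hypothetical examples, not values from your deployment:

```python
import configparser

# Hypothetical indexes.conf contents for illustration only.
SAMPLE_INDEXES_CONF = """
[main]
maxTotalDataSizeMB = 150000

[security]
maxTotalDataSizeMB = 100000

[web]
maxTotalDataSizeMB = 50000
"""

def total_index_budget_mb(conf_text: str) -> int:
    """Sum maxTotalDataSizeMB across all index stanzas."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    return sum(
        parser.getint(section, "maxTotalDataSizeMB")
        for section in parser.sections()
        if parser.has_option(section, "maxTotalDataSizeMB")
    )

total = total_index_budget_mb(SAMPLE_INDEXES_CONF)
print(f"Configured cap: {total} MB")
print("Within 300GB budget:", total <= 300_000)
```

Note that the caps bound index data only; Splunk's own logs and other files on the volume take additional space, so leave some headroom below 300GB.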