I am learning Splunk and I have built the following test environment in Docker: a Splunk server running in a container, using the official Docker image splunk/splunk:8.2, and another container (call it "client") where I installed the forwarder and added a file to monitor with this command:

$SPLUNK_HOME/bin/splunk add monitor $MY_LOGFILE -index main -sourcetype mylog

Everything works fine. If I append to $MY_LOGFILE in the client container with

echo "hello" >> $MY_LOGFILE

then I can see the new line in the Splunk web console. Now I am feeding my log file with an endless bash count-up loop and I can see everything in the Splunk web console. Great.
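The feeder loop is roughly like this (the exact script does not matter, it is just an endless counter appending lines):

i=0
while true; do
  # append a new counter line that the forwarder picks up
  echo "line $i" >> "$MY_LOGFILE"
  i=$((i+1))
  sleep 1
done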
My question: I would like to delete old records from Splunk to save disk space, so I followed the documentation and created /opt/splunk/etc/system/local/indexes.conf (via sudo vi) with this content:

[main]
maxTotalDataSizeMB=1
frozenTimePeriodInSecs=300
disabled=false

As far as I know, this should make Splunk automatically delete old data once the index reaches 1 MB. After creating this new config file I restarted the Splunk Docker container (and also restarted Splunk manually), but nothing happens. The setting does not seem to be taken into account: the number of records in the index keeps growing and the index size keeps increasing without any limit.

I use the following searches to check the record count and the index size:

sourcetype=mylog | stats count as Records

index=_internal source=* type=Usage idx=* | eval SIZE=b/1024 | stats sum(SIZE) by st

Result: 30756.775390625
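In case it helps, I assume I could also check the individual buckets of the index with something like this (just a sketch, I am not sure this is the right check):

| dbinspect index=main | table bucketId, state, sizeOnDiskMB, startEpoch, endEpoch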
But when I stop Splunk, I am able to clean up the index with these commands:

splunk stop
splunk clean eventdata
splunk start

However, I have a scenario where I need to limit the size of the index and the disk usage taken by Splunk "in real time", i.e. without a stop and start. What am I missing here? Thx
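PS: I assume I can verify that my indexes.conf settings are actually picked up with btool, e.g. by running this inside the Splunk server container (the --debug flag should also show which file each setting comes from):

/opt/splunk/bin/splunk btool indexes list main --debug

Is that the right way to check it?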