I am getting this message when I start Splunk (version 4.2): 'You are low in disk space on partition "/opt/splunk/var/lib/splunk/audit/db". Indexing has been paused. Will resume when free disk space rises above 200MB.'
It seems that unnecessary duplicate copies of some files are being created in /opt/splunk/var/log/splunk, which is eating into the disk space: for example metrics.log1, metrics.log2, etc., and splunkd_access.log1, splunkd_access.log2, etc. After I delete these extra files, Splunk works correctly and the error message goes away. Is there a configuration or setting somewhere that would switch off (stop) writing to these redundant files, so that they are not created in the first place?
I don't think the different folders are on different partitions; they should all be on the same one. log.cfg is configured to log at INFO for the most part (I do not see any DEBUG level in there). The files I mentioned in /opt/splunk/var/log/splunk get up to 25 MB in size, which matches the value defined in log.cfg: appender.metrics.maxFileSize=25000000.
There is also another property below that line in the log.cfg that says: appender.metrics.maxBackupIndex=5
I guess that is what is creating the multiple files with the same name but an increasing number in the extension?
So should I change maxFileSize to a lower value, or maybe set maxBackupIndex to a lower number?
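If you do want to cap how much space the rotated files can consume, a sketch of what the relevant lines in log.cfg might look like is below (the values shown are illustrative, not defaults; worst-case disk use per log is roughly maxFileSize × (maxBackupIndex + 1)):

```
# log.cfg (or an override file, if your Splunk version supports one,
# so edits survive upgrades) -- illustrative values only
appender.metrics.maxFileSize=10000000   # rotate at ~10 MB instead of 25 MB
appender.metrics.maxBackupIndex=2       # keep only 2 rotated copies instead of 5
```

Note that the numbered files are rotation backups, not accidental duplicates, so lowering maxBackupIndex reduces how many are kept rather than stopping rotation outright.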
Does the audit index actually have its own partition, as the message suggests? If so, I don't see how it could be related to any logs over in /opt/splunk/var/log/splunk.
And how big are the files in /opt/splunk/var/log/splunk? Unless you have configured log.cfg to log at DEBUG, splunkd.log and web_service.log should not get particularly large. The metrics and access logs don't have a log level, so they're not affected by log.cfg, but they shouldn't get very big either.
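To answer both questions (which partition a directory is on, and how big the logs are), a small helper script like the hypothetical one below can be run against each path; pass the directory to inspect as the first argument:

```shell
#!/bin/sh
# checkspace.sh -- show which filesystem/partition holds a directory,
# how much free space that partition has, and the directory's total size.
# Usage (paths are examples from this thread):
#   ./checkspace.sh /opt/splunk/var/lib/splunk/audit/db
#   ./checkspace.sh /opt/splunk/var/log/splunk
DIR="${1:-.}"   # default to the current directory if no argument given

df -h "$DIR"    # partition, capacity, and free space for the filesystem holding DIR
du -sh "$DIR"   # total on-disk size of DIR's contents
```

Running it on both paths shows immediately whether audit/db and the log folder share a partition; if df prints the same filesystem for both, freeing space in one helps the other.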