In my production environment, I have around 4 Splunk environments.
Every other day I see the error below in the Splunk web console. My usual next move is to delete warm buckets (named something like "db_1520234296_1519816201_101") from the path below to free up space.
/opt/splunk/indexer/var/lib/splunk/_internaldb/db
Search peer server has the following message: Disk Monitor: The index processor has paused data flow. Current free disk space on
partition '/opt/splunk' has fallen to 4988MB, below the minimum of 5000MB. Data writes to index path '/opt/splunk/indexer/var/lib/splunk/_internaldb/db' can't safely proceed. Increase free disk space on partition '/opt/splunk' by removing or relocating data.
I would like to understand: what is the impact to my production data if I delete these logs, and what type of data does _internaldb contain? Can someone explain this to me? Thanks.
The _internal index contains Splunk log files like splunkd.log. Deleting _internal buckets has no effect on your production data, but it may make it more difficult to troubleshoot problems.
Also, while the log may say Splunk won't write to _internal, that doesn't mean _internal is the cause of the problem. Review all of the usage of that disk to see what is consuming it. If there are non-Splunk applications writing to the same space, try to separate them. Review the retention policies of your indexes to see if you are storing more data than necessary. If so, changing your indexes.conf settings can reduce the amount of space used.
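For example, retention can be tightened per index in indexes.conf via the standard `frozenTimePeriodInSecs` and `maxTotalDataSizeMB` settings. A minimal sketch, with purely illustrative values and an example stanza name — pick values that match your own compliance and troubleshooting needs:

```ini
# indexes.conf (illustrative values only)
[main]
# Roll buckets to frozen (deleted, unless coldToFrozenDir is set)
# once their newest event is older than 30 days
frozenTimePeriodInSecs = 2592000
# Cap the total size of this index at roughly 50 GB
maxTotalDataSizeMB = 51200
```

Whichever limit is reached first wins: Splunk freezes the oldest buckets when either the age or the size cap is exceeded, so lowering either one reduces disk usage.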
Since this happens often, consider moving $SPLUNK_DB to larger storage, preferably separate from $SPLUNK_HOME.
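Relocating the index database is done by pointing SPLUNK_DB at the new volume in splunk-launch.conf. A rough sketch, assuming a hypothetical mount point /data/splunk: stop Splunk, copy the contents of the old $SPLUNK_DB there, update the setting, then start Splunk again.

```ini
# $SPLUNK_HOME/etc/splunk-launch.conf
# Point the index database at a larger, dedicated volume
# (/data/splunk is a hypothetical path -- use your own mount point)
SPLUNK_DB=/data/splunk
```

Keeping $SPLUNK_DB on a separate partition from $SPLUNK_HOME also means index growth can no longer starve the Splunk installation itself of disk space.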
Well, although it is not your own ingested data, the impact is that you lose the ability to debug actions and establish accountability for what users have been doing.
That is quite a security issue if you ever need to justify or reconstruct an internal sequence of events.
You would, in effect, be unknowingly covering tracks.