Hi,
Is there a way I can see what is happening when my volumes reach 100% capacity? They are purging data, and I want to see the internal message Splunk logs upon purge, as well as any other valuable info about the purge. I can view the capacity in the DMC, but not the purging messages/process.
I believe you want to know about bucket rollover, which happens when an index hits its configured maximum size. You can see the rollover messages in splunkd.log, and you can change the logging levels in log.cfg to INFO to see additional details. Hope this helps.
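For reference, here is roughly what that logging change could look like. The category names below are ones I have seen emit bucket-roll messages on my own installs, so verify them against your log.cfg; putting the overrides in log-local.cfg keeps them from being wiped by an upgrade:

```ini
# $SPLUNK_HOME/etc/log-local.cfg -- overrides log.cfg and survives upgrades.
# Category names are assumptions from my installs; check your log.cfg
# for the authoritative list.
[splunkd]
category.BucketMover=INFO
category.HotBucketRoller=INFO
```

You can also watch those messages from the search bar with something like: index=_internal sourcetype=splunkd component=BucketMover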
Thanks for getting back to me. The issue I'm facing is that even hot/warm buckets are purging within their retention period. So while I can check buckets rolling, I would also like to search for any info relating to the purge itself. One of the things I have found in my internal logs is volume=primary Trimming done.
So, it appears that the disk space allocated for SPLUNK_DB (usually /opt/splunk/var/lib/splunk) is running out because something else is consuming it faster, possibly the Splunk logs or other processes/services. On the other hand, if you have pointed SPLUNK_DB to another mount point, you should check the usage there. On a Linux system, du -smh * and df -kh can help you look at disk and file usage.
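A quick sketch of that check (the SPLUNK_DB path here is the default install location; adjust it if yours differs):

```shell
# SPLUNK_DB path assumed from a default install; override via the env var.
SPLUNK_DB=${SPLUNK_DB:-/opt/splunk/var/lib/splunk}
# Fall back to the current directory so the commands still run for illustration.
[ -d "$SPLUNK_DB" ] || SPLUNK_DB=.

# How full is the filesystem that holds SPLUNK_DB?
df -kh "$SPLUNK_DB"

# Which subdirectories under it are the biggest (in MB, largest first)?
du -sm "$SPLUNK_DB"/* 2>/dev/null | sort -rn | head
```

If the filesystem fills up faster than your size limits anticipate, Splunk will trim buckets to stay within them regardless of time-based retention.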
Thanks, it seems our volume is onboarding new data a lot faster than our retention rules can handle, hence the pruning of data.
Just a hint for configuring volumes: make sure you create separate volumes for indexes with different retention times. Volume pruning based on size limits happens independently of configured retention, so if you mix - for example - indexes with 30- and 90-day retention in the same volume, you may age out the 90-day data sooner than you want to.
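As a sketch of that layout in indexes.conf (volume names, paths, sizes, and index names here are all hypothetical examples, not your actual config):

```ini
# indexes.conf -- two volumes so size-based pruning of 30-day data
# never evicts 90-day data. All names, paths, and sizes are examples.
[volume:hot30]
path = /mnt/splunk_30d
maxVolumeDataSizeMB = 300000

[volume:hot90]
path = /mnt/splunk_90d
maxVolumeDataSizeMB = 500000

[web_30d]
homePath   = volume:hot30/web_30d/db
coldPath   = volume:hot30/web_30d/colddb
thawedPath = $SPLUNK_DB/web_30d/thaweddb   # thawedPath cannot reference a volume
frozenTimePeriodInSecs = 2592000           # 30 days

[audit_90d]
homePath   = volume:hot90/audit_90d/db
coldPath   = volume:hot90/audit_90d/colddb
thawedPath = $SPLUNK_DB/audit_90d/thaweddb
frozenTimePeriodInSecs = 7776000           # 90 days
```

With this split, when volume:hot30 hits its maxVolumeDataSizeMB, only the buckets in the 30-day indexes are candidates for trimming.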