I have two indexers, a search head, and universal forwarders. After upgrading to 6.5, I am seeing a large number of these messages in splunkd.log on my indexers:
INFO DatabaseDirectoryManager - Getting size on disk: Unable to get size on disk for bucket id=os~158~812F771A-35F3-4538-833D-F47FB7CB17E5 path="/splunk-indexes/default/os/colddb/db_1440115485_1440004578_158" (This is usually harmless as we may be racing with a rename in BucketMover or the S2SFileReceiver thread, which should be obvious in log file; the previous WARN message about this path can safely be ignored.) caller=getCumulativeSizeForPaths
Hi sbrice,
It looks like index=os (i.e. the *Nix app) buckets are rolling to frozen/archive while they are being scanned for size. What does that index have set for data retention?
The latest time (in epoch) in your bucket id is mid-August 2015.
You can use | dbinspect to investigate the bucket lifecycle, or navigate to your colddb path to verify that the bucket no longer exists; searching your logs for the bucket id should tell you the story.
Since it is only an INFO-level message, a quick grep for ERROR or WARN in splunkd.log should confirm whether there is anything of real concern (see the sketch below).
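For example, a rough way to check from the indexer, assuming a default install under /opt/splunk and reusing the bucket id and colddb path from your log message (adjust both for your environment):

    # Trace the bucket id's history in splunkd.log (bucket id taken from the question)
    grep "812F771A-35F3-4538-833D-F47FB7CB17E5" /opt/splunk/var/log/splunk/splunkd.log

    # Check whether the cold bucket still exists on disk
    ls -ld /splunk-indexes/default/os/colddb/db_1440115485_1440004578_158

    # Confirm nothing more serious than INFO is being logged around these messages
    grep -E "ERROR|WARN" /opt/splunk/var/log/splunk/splunkd.log | tail -50

From the search head, running | dbinspect index=os will likewise show each bucket's state (hot/warm/cold) and help confirm whether the bucket in question has already rolled.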
Thank you! I made the changes to the frozen/archive policy, restarted the indexer, and the logs have cleared up.
@sbrice can you elaborate on the changes you made?
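For anyone landing here later: the specific changes were not posted, but frozen/archive retention for an index is typically controlled in indexes.conf. A minimal sketch with example values only (the index stanza matches the one in the log, the retention period and archive path are illustrative, not what sbrice used):

    # indexes.conf (example values only; tune retention to your own requirements)
    [os]
    # roll buckets to frozen after ~90 days (value is in seconds)
    frozenTimePeriodInSecs = 7776000
    # optionally archive frozen buckets to a directory instead of deleting them
    coldToFrozenDir = /splunk-archive/os

A restart of the indexer (or a rolling restart in a clustered environment) is needed for the change to take effect, which matches what sbrice described above.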