Hey All,
Our Splunk environment is deployed in the Azure cloud as an "on-prem" installation, and we are trying to use blob storage as our cold storage. We were able to mount the blob storage using blobfuse and the associated libraries, and we set up our indexers to mount the storage on boot so we always have access to it. We also set up an SSD temp location, which blobfuse requires as a local staging area before data is written out to blob. After all of this setup I created a test index and pointed its cold path in indexes.conf at this mount, which stages writes through the temp location before they land in blob storage. Once that was in place I did see Splunk write the colddb directory and the index directory underneath it.
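For reference, this is roughly what the setup looks like (the mount points, paths, and index name below are placeholders, not our actual values):

# Mount the blob container with blobfuse, using local SSD as the blobfuse temp/cache area
sudo mkdir -p /mnt/blobcold /mnt/ssdtemp/blobfusetmp
sudo blobfuse /mnt/blobcold \
    --tmp-path=/mnt/ssdtemp/blobfusetmp \
    --config-file=/etc/blobfuse/fuse_connection.cfg \
    -o allow_other -o attr_timeout=240 -o entry_timeout=240

# indexes.conf stanza for the test index, with coldPath on the blobfuse mount
[test_blob_index]
homePath   = $SPLUNK_DB/test_blob_index/db
coldPath   = /mnt/blobcold/test_blob_index/colddb
thawedPath = $SPLUNK_DB/test_blob_index/thaweddb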
The question I have concerns the index/filesystem validation Splunk performs on service start/restart. I understand it uses the locktest utility, as described in this document: https://docs.splunk.com/Documentation/Splunk/7.3.1/Troubleshooting/FSLockingIssues
I don't feel comfortable disabling the filesystem check altogether. What confuses me is how Splunk can write the directories with no issues, which suggests the filesystem is fine, yet still fail this check. Does anyone have any insight into how this utility works?
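My rough mental model of the check is a standard POSIX lock probe like the one below. This is just my own sketch to frame the question, not Splunk's actual locktest code, and the path is a placeholder:

# Create a test file on the cold storage mount and try to take an exclusive lock on it
TESTFILE=/mnt/blobcold/.locktest.$$
touch "$TESTFILE" || echo "plain write failed"

# flock(1) exits non-zero if the exclusive lock cannot be acquired within 5 seconds
if flock -x -w 5 "$TESTFILE" -c 'true'; then
    echo "exclusive lock acquired OK"
else
    echo "lock failed -- this is the kind of failure I suspect the startup check is hitting"
fi
rm -f "$TESTFILE"

If that is roughly what locktest does, it would explain how Splunk can create directories and files on the mount without issue (plain writes) but still fail the startup check (the locking part).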
The temp mount we write to is a filesystem Splunk supports (XFS).
I'm just trying to figure out whether it's possible to keep this filesystem check enabled, rather than fully disabling it, while continuing to use this setup for cold storage.
Thanks,
Andrew
Bumping the post