We have an issue with the Splunk forwarder that we would like to understand. We monitor one of the directories for the pattern dev_*. In that directory there is a file, dev_tp_23480, which the application creates, deletes, and then creates again.
The problem is that Splunk apparently sets a lock on the file, so the second creation by the application fails with an error.
After the Splunk forwarder is switched off, everything runs fine again and dev_tp_23480 can be created, so the issue definitely has something to do with Splunk.
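For reproducing the symptom, the application's file lifecycle can be sketched roughly like this (the directory and file name here are placeholders; run the same pattern inside the actually monitored directory while the forwarder is active to see the second creation fail):

```python
import os
import tempfile

# Hypothetical stand-in for the monitored directory and the dev_tp file.
monitored_dir = tempfile.mkdtemp()
path = os.path.join(monitored_dir, "dev_tp_23480")

results = []
for attempt in (1, 2):
    try:
        # The second open() is the step that fails when splunkd
        # still holds a lock/handle on the just-deleted file.
        with open(path, "w") as f:
            f.write("payload\n")
        os.remove(path)
        results.append((attempt, "ok"))
    except OSError as exc:
        results.append((attempt, f"failed: {exc}"))

print(results)  # without a forwarder lock, both attempts succeed
```

Without the forwarder involved, both attempts succeed, which matches what we see once splunkd is switched off.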
We do not actually need this file in Splunk, so as a workaround I have blacklisted dev_tp, but I am really curious to understand the root cause, since it could affect several landscapes.
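For reference, the workaround in inputs.conf looks roughly like this (the monitor path is a placeholder; `blacklist` takes a regular expression, so `dev_tp.*` excludes only the transient files while the rest of the dev_* pattern stays monitored):

```ini
# inputs.conf on the forwarder -- the path below is hypothetical
[monitor:///app/work]
whitelist = dev_.*
# workaround: exclude the transient dev_tp files from monitoring
blacklist = dev_tp.*
```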
We also took a trace of the file accesses (please see the picture/attachment), and it clearly shows splunkd accessing/checking this file at a very high frequency.
Based on the configured interval (15 sec), I would have expected splunkd to check the files only every 15 seconds.
Do I understand this wrong?
And if splunkd checks the files in real time, isn't that rather resource-intensive? Can this frequency be parametrized?
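One setting that may be relevant here, if the lock comes from splunkd keeping the file handle open for a short while after reading: the tailing processor's `time_before_close` setting in limits.conf controls how many seconds splunkd waits after reaching EOF before closing a monitored file. This is a sketch of a possible mitigation, not a confirmed fix for the root cause:

```ini
# limits.conf on the forwarder -- sketch, not a confirmed fix
[inputproc]
# Seconds splunkd waits after EOF before closing a monitored file
# (default is 3); lowering it releases file handles sooner.
time_before_close = 1
```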