In my environment, several types of logs are stored on the log server in the following form:
~ /"Log type"/"Device name"/~.log
By specifying the following in the forwarder, each log type is routed to its own index and transferred to the indexer:
[monitor://~/"Log type"/*/~.log]
index = "Log type"
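As a concrete sketch of this setup (the paths, log types, and index names below are hypothetical placeholders; the real ones are elided above):

```ini
# Hypothetical layout: ~/<log type>/<device name>/<file>.log
# e.g. /var/log/apache/host01/access.log

[monitor:///var/log/apache/*/access.log]
index = apache

[monitor:///var/log/firewall/*/traffic.log]
index = firewall
```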
We have also set each log to be rotated at 2 a.m.
This worked without problems until yesterday, when I noticed around 10 a.m. that only one kind of log had been transferred in the several hours since the 2 a.m. rotation.
After the indexers and forwarders were restarted, I confirmed that the logs that had not been sent were transferred.
Is it a known phenomenon that log rotation causes such problems?
Is there a workaround?
You can use filename whitelisting in your input monitor stanza (assuming that the log rotation renames your log files as well).
E.g., if the file you are monitoring is named "SomeFileName_abc.log", and after log rotation it is renamed to "SomeFileName_abc_04302018.log", you can use a filename wildcard whitelist so that only the active file is read.
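A minimal inputs.conf sketch of that whitelist (the directory path and index name are hypothetical; `whitelist` is a regex matched against the full file path):

```ini
# Monitor the directory, but only read the live file;
# rotated copies such as SomeFileName_abc_04302018.log are excluded.
[monitor:///var/log/myapp]
whitelist = SomeFileName_abc\.log$
index = myapp
```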
The following doc will help you more on this:
In the past, the following lines, indicating that "each kind of log" had begun to be read, were output to splunkd.log immediately after rotation:
"WatchedFile - Will begin reading ~"
However, when this phenomenon occurred yesterday, the lines that were output indicated that "only one kind of log" began to be read.
Therefore, I think it failed at the stage of reading the files after rotation.
The problem is that you have too many files/directories to sort through, and Splunk is getting bogged down tracking everything. You need to make sure that there is a housekeeping process (logrotate can do this) that deletes the older log files so they do not hang around "forever". Otherwise this will only get worse.
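A minimal logrotate sketch of such housekeeping (the path is hypothetical); `rotate` caps how many old copies are kept, so they are deleted instead of accumulating:

```
/var/log/myapp/*/access.log {
    daily
    rotate 7        # keep at most 7 rotated copies; older ones are deleted
    compress
    missingok
    notifempty
}
```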
Oh, how wrong you are! Although Splunk is not monitoring them, it still has to sift through them and keep track of them. Once you hit thousands of files, Splunk gets bogged down and things get massively delayed. Trust me. If you don't believe me, open a support ticket and see how quickly they will tell you the same thing.
Oh...I did not know that ...
So you mean that the path specified in the monitor stanza is converted to a regular expression, and that when searching for matching logs, it takes time to judge whether each file in the directory matches that regular expression if there are many files, and this causes the hang-up?