My application generates around 100,000 files per day. I tested indexing them with file monitoring, but it took almost a week to get through them. Does anyone know the cause of this issue, and a possible solution? I suspect it would improve if there were fewer active log files, but that is not something I can change in our environment.
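For reference, the test used an ordinary monitor stanza along these lines (the path is a placeholder, and the `ignoreOlderThan` line is just one idea for capping the number of files the tailer keeps active, assuming older files no longer receive writes):

```
# inputs.conf (paths are examples, not our real layout)
[monitor:///var/log/myapp]
# Assumption: files untouched for a day are finished, so the tailing
# processor can drop them from its active set.
ignoreOlderThan = 1d
```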
It is possible to run multiple instances of Splunk forwarders. This is particularly easy on *nix systems. With this number of files, you may want to investigate this solution.
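If you go this route, each instance needs its own SPLUNK_HOME and its own management port, and each should monitor a disjoint subset of the files. A minimal sketch, assuming a second Universal Forwarder unpacked under /opt/splunkforwarder2 (all paths here are placeholders):

```
# /opt/splunkforwarder2/etc/system/local/web.conf
# The default management port is 8089; move the second instance
# off it so the two instances do not collide.
[settings]
mgmtHostPort = 127.0.0.1:8090

# /opt/splunkforwarder2/etc/system/local/inputs.conf
# Give this instance its own slice of the files, e.g. one of the
# directories the application writes into.
[monitor:///var/log/myapp/batch2]
```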
There is no hard limit; it will take some testing to discover the breaking point. This question, which is looking for the same answer, says basically the same thing:
How are the files being written now? Are they in separate directories? Is there a naming convention?
Is there any sizing guideline for the number of files a single forwarder can monitor? I would like to plan how many forwarders are required for my case.