Hi, our use case is that we'll either be monitoring approximately 6,000 files that update at random intervals, or monitoring a folder that receives 6,000 files every 15 minutes with a retention period of 3 months. License-wise the latter is the better option, but I'm worried about its performance.
We are planning to use either a universal or a heavy forwarder for this. Will the heavy/universal forwarder system requirements specified in the Splunk Docs be enough in this case? In the latter case, will adjusting the ulimits be enough to monitor the folder?
Thank you and have a nice day!
If the folder structure for the 6000 files is complex, you should do everything in your control to make the monitor stanzas as specific as possible.
Using wildcard monitor statements over deep file systems has a significant performance impact, so if this can be avoided it would be of benefit.
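As a sketch of what "as specific as possible" can look like in `inputs.conf` (the paths, sourcetype, and retention value here are made up for illustration):

```ini
# Specific stanza (preferred) -- Splunk only has to scan one known directory.
[monitor:///var/log/app/current]
sourcetype = app_log
# Skip files untouched for a week; useful with long retention periods.
ignoreOlderThan = 7d

# Deep wildcard (avoid if you can) -- the "..." recursive wildcard forces
# Splunk to walk the entire tree on every scan pass:
# [monitor:///var/log/.../*.log]
```

The more precisely the stanza pins down the directory, the less file-system walking the tailing processor has to do per scan.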
As long as the box is sufficiently resourced (network/memory/IO) I don't think you have too much to worry about - the Splunk recommended ulimit is 64k open files.
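If you want to verify what limit the forwarder process will actually inherit, a quick check from the same user account that runs splunkd (this uses the standard library `resource` module, Unix only; the 64000 threshold reflects the recommendation above):

```python
# Print the open-file (nofile) limits seen by the current process.
# Run as the user that launches splunkd, since splunkd inherits these.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft} hard={hard}")
if soft != resource.RLIM_INFINITY and soft < 64000:
    print("soft limit is below the commonly recommended 64k")
```

If the soft limit is low, it is typically raised in `/etc/security/limits.conf` (or the systemd unit, if systemd manages Splunk).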
Personally, I would opt for option 1 (files updated at random), as this (presumably) would stagger the changes throughout an arbitrary 15-minute period, versus one big change every quarter of an hour. I also don't understand your reference to licensing.
Your biggest challenge will be making sure your indexing pipelines are big enough to keep up with the rate of change, though you haven't mentioned anything about volume.
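To put a rough number on why volume matters, here is a back-of-the-envelope estimate. The average file size is an assumption (the post gives no volume figures), so treat the output as illustrative only:

```python
# Rough daily-volume estimate for the "6,000 files per 15 minutes" option.
# avg_file_kb is an ASSUMED value -- substitute your real average file size.
files_per_batch = 6000            # files arriving every 15 minutes
batches_per_day = 24 * 60 // 15   # 96 fifteen-minute windows per day
avg_file_kb = 10                  # assumption, not from the original post

files_per_day = files_per_batch * batches_per_day
daily_volume_gb = files_per_day * avg_file_kb / 1024 ** 2

print(f"{files_per_day:,} files/day, ~{daily_volume_gb:.1f} GB/day")
# With these assumptions: 576,000 files/day, ~5.5 GB/day
```

Even at a modest 10 KB per file that is over half a million files a day, which is why both the monitor-stanza specificity and the indexing pipeline capacity matter here.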