We are looking to install a Splunk universal forwarder to collect a debug log from an AD domain controller, and the log can peak around 5,000 events per second (EPS). Will the forwarder be able to handle this? What is the maximum number of events it can handle, and can the indexer keep up?
Yes, I wouldn't expect the UF to be the bottleneck when reading from a monitor input; however, at that rate you'll probably need to adjust limits.conf to raise the default 256KBps forwarding limit:
maxKBps option and limiting a Forwarder's rate of thruput
The indexer is the more likely bottleneck, but as long as you have some CPU headroom it will probably be fine. I haven't tried this myself, but it should work: you might consider sending to a test index and adjusting maxKBps on the UF to gauge the impact before running it wide open.
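As a sketch, the throughput cap lives in limits.conf on the forwarder; the stanza and attribute names below are real, but the 2048 value is just an illustrative starting point for your testing:

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf on the UF
[thruput]
# Value is in KB/s; the UF default is 256.
# Raise it incrementally during testing, or set 0 for unlimited.
maxKBps = 2048
```

Setting it to 0 removes the cap entirely, which is what you'd eventually run with if the indexers keep up.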
Also, at that volume, if you're using a custom index, I would make sure to set:
maxDataSize = auto_high_volume
Otherwise, you could end up with thousands of buckets over time. Good luck; sounds like a fun project!
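To put that setting in context, here is a minimal indexes.conf sketch; the index name "ad_debug" is hypothetical, and the paths follow the standard layout:

```ini
# indexes.conf on the indexer(s) -- "ad_debug" is a hypothetical index name
[ad_debug]
homePath   = $SPLUNK_DB/ad_debug/db
coldPath   = $SPLUNK_DB/ad_debug/colddb
thawedPath = $SPLUNK_DB/ad_debug/thaweddb
# auto_high_volume rolls buckets at ~10GB instead of the ~750MB "auto"
# default, so a high-volume index produces far fewer buckets over time.
maxDataSize = auto_high_volume
```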
Well, we do have a couple of heavy forwarders (HWFs) that top out at 3,500-4,000 EPS. In that case, the data (essentially HTTP logs) comes from the local filesystem via a batch input. When I was testing this configuration, I tried a lightweight forwarder (LWF) with maxKBps = 0 and it was significantly faster, probably similar to what you'd get with a UF.
I remember that the LWF was fast enough to cause the indexers to throttle and am fairly sure it was in the tens of thousands of EPS range. I would personally be surprised if the UF was the limiting factor in this deployment but hope you post back to let us know how it went!
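For reference, a batch input like the one described might look like the following; the path, sourcetype, and index are hypothetical, but the stanza format and move_policy attribute are standard:

```ini
# inputs.conf on the forwarder -- path, sourcetype, and index are hypothetical
[batch:///var/log/http/*.log]
# batch inputs index each file once and then delete it;
# "sinkhole" is the only supported move_policy
move_policy = sinkhole
sourcetype = access_combined
index = web
```

Note that unlike a monitor input, a batch input destroys the source files, so it only suits logs that are dropped off for one-time ingestion.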
Thanks for the response! I think we are just going to set the UF maxKBps to 0 for unlimited, but the AD group is concerned about the max EPS and wants a rough number on what it can handle. I know there are many factors to consider, but is there any documentation that gives a rough estimate of the EPS a UF can handle?