Firstly, maxing out the indexer's processor may have a negative impact on performance. (As an aside, maxKBps relates to network output from a universal forwarder or from a Splunk indexer that is itself forwarding; if you remove the cap entirely on a production network you risk flooding the network or blocking an indexer.)
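For reference, that cap lives in limits.conf on the forwarding instance. A minimal sketch, where the 512 figure is just an illustrative value rather than a recommendation:

    # $SPLUNK_HOME/etc/system/local/limits.conf (on the forwarding instance)
    [thruput]
    # Universal forwarders default to 256 KBps; setting this to 0 removes the
    # cap entirely, which is what risks flooding a shared network or an indexer.
    maxKBps = 512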
There are different "queues" within Splunk that handle different jobs; as an example, we're talking about things like:
Line breaking
Regex field extraction
Timestamping
Writing unwanted logs to the null queue (sending them into the abyss; see the sketch after this list)
Perhaps some additional processing on something like Windows event logs
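As a concrete example of that null-queue routing, here is a minimal props.conf/transforms.conf sketch for the indexer (or heavy forwarder); the sourcetype name and regex are made up, so substitute your own:

    # props.conf
    [my:sourcetype]
    TRANSFORMS-setnull = drop_debug_events

    # transforms.conf
    [drop_debug_events]
    # Any event matching this regex is routed to nullQueue and never indexed
    REGEX = level=DEBUG
    DEST_KEY = queue
    FORMAT = nullQueue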
If a queue starts to block, it will delay events or cause them to drop, depending on how they are being indexed. Also consider that Splunk can monitor and index local log files, which lets it read and index changes relatively quickly compared with a network connection where you are forwarding events from another machine; in that case you are restricted by the disk I/O on the remote server (which may be busy or idle) and by network traffic. Forwarding also runs over TCP, which may require data to be re-transmitted if it is dropped en route.
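For comparison, this is roughly what the two sides of that forwarding setup look like on the remote machine; the path, sourcetype, index and hostname are all placeholders:

    # inputs.conf - tail a local file on the remote server
    [monitor:///var/log/myapp/app.log]
    sourcetype = myapp
    index = main

    # outputs.conf - ship the events to the indexer over TCP (9997 is the
    # conventional receiving port, but use whatever your indexer listens on)
    [tcpout:primary_indexers]
    server = indexer1.example.com:9997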
You may also be logging syslog from remote systems, which may be coming in on a UDP connection; that creates yet another area for potential problems, since dropped UDP packets are simply lost.
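A typical UDP syslog input looks like this, for illustration (port 514 and the syslog sourcetype are the usual choices, adjust to taste):

    # inputs.conf - listen for syslog on UDP 514
    [udp://514]
    sourcetype = syslog
    # record the sending IP as the host field
    connection_host = ip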
I haven't looked at your maths in great detail, but assuming you've taken the right figures I'm sure it reflects a clean setup on a clean network reading in local files. Realistically, the only way to get any real idea is to perform a proof of concept and actually gauge the performance of your systems indexing into Splunk, along with a realistic number of EPS.
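If you do run that proof of concept, one rough way to gauge what you are actually achieving is to search Splunk's own metrics.log. A sketch of that kind of search; the index name is a placeholder and the field names may vary between versions:

    index=_internal source=*metrics.log group=per_index_thruput series=your_index
    | timechart span=5m avg(eps) AS avg_eps sum(kb) AS kb_indexed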
EDIT: Also, in relation to the quote above, I believe (although I can't say for sure) that the point is that vendors who quote EPS figures give overly optimistic results. Splunk knows that data varies, but it does have a very good indexing engine; to see what you can do with it, the best option is to try 🙂