We actually get this question quite a lot in the support team, and my usual response is:
What kind of performance stats are you looking for?
Splunk has two main operations: indexing and searching. Both depend on the hardware resources available; the more resources you give it, the faster Splunk will run. I'm not just referring to CPU and memory: Splunk is also very I/O intensive, so the speed of your storage volume matters a great deal. If you intend to use RAID, that will also affect the numbers; Splunk recommends RAID 0 for best performance, and the recommended hardware config is detailed here.
Performance also depends on the data you are indexing and searching. If you are dealing with standard single-line syslog containing key=value pairs, Splunk will handle it like a champ and eat it up as fast as you can feed it in, provided your disks are fast enough. If your events are multi-line, with varying lengths, formats and so on, indexing will be slower and searching will also be affected.
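To give a concrete (purely illustrative) idea of what "multi-line" means to Splunk: event breaking is driven by settings in props.conf, and awkward data usually means more work in that part of the pipeline. The sketch below is not a recommendation for your data; the sourcetype name my_multiline_app is made up, and the LINE_BREAKER regex assumes each event starts with an ISO-style date, so adjust it to your own format.

    # props.conf -- illustrative only; sourcetype name and regex are placeholders
    [my_multiline_app]
    # Breaking events on an explicit regex is cheaper than heuristic line merging
    SHOULD_LINEMERGE = false
    # Assumes every event begins with a date like 2013-05-01; change to match your data
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
    # Cap oversized events so one malformed record can't bog down the pipeline
    TRUNCATE = 10000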
The only way to know for sure how Splunk will perform with your data is to run some tests with real data samples. There is an app on Splunkbase here that will help you with this; it's essentially a sequence of CLI commands that runs a test against a dataset you specify.
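If you'd rather hand-roll a quick test, a bare-bones version looks something like the commands below. This is only a sketch: it assumes a *nix install with $SPLUNK_HOME/bin on your PATH, and the index name perftest and the sample file path are placeholders. The throughput search reads Splunk's own metrics.log in the _internal index, which records per-index indexing throughput.

    # create a throwaway index for the test (perftest is a placeholder name)
    splunk add index perftest

    # index a sample file once; adjust the path and sourcetype to your data
    splunk add oneshot /path/to/sample.log -index perftest -sourcetype perftest

    # once indexing has finished, pull the throughput numbers from metrics.log
    splunk search 'index=_internal source=*metrics.log* group=per_index_thruput series=perftest | stats avg(kbps) AS avg_KBps max(kbps) AS peak_KBps'

    # when you're done, stop Splunk and wipe the test data
    splunk stop
    splunk clean eventdata -index perftest

The kbps figures are kilobytes per second, sampled by default roughly every 30 seconds, so divide by ~1024 to compare them with the MB/sec numbers discussed below.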
As you can see, there's no easy answer to this question because there are a lot of dependencies, but on a well-tuned, beefy server we would expect to see average indexing throughput of 4-7 MB/sec. Anything higher than that would likely start to impact search performance.
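As a rough back-of-envelope conversion, 4-7 MB/sec of sustained indexing works out to roughly 345-600 GB of raw data per day (e.g. 5 MB/sec × 86,400 sec/day ≈ 430 GB/day), which may be the more familiar figure if you think in terms of daily volume.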