Processing speed depends on how many cores each indexer has and on how many indexers are indexing the data. It also depends on whether the data is consumed within a 1-hour window or spread over 24 hours; in the latter case it runs much faster.
We currently have 2 clustered indexers with 16 cores each, indexing around 100-120 GB/day. One of our applications generates around 5 billion events per month, and its report took 8 days to run even on an accelerated data model, so we worked around it by setting up a summary index. My advice is not to buy tons of hardware for 1-2 applications; there's almost always a workaround to boost performance. You can also leverage cloud infrastructure if you need to.
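To illustrate the summary-index workaround, here is a minimal savedsearches.conf sketch: a scheduled search pre-aggregates events and writes the results to a summary index with the `collect` command, so reports can query the much smaller summary instead of the raw data. The stanza name, source index, field names, and summary index name are all hypothetical; adjust them to your environment.

```ini
# savedsearches.conf -- hypothetical example, not our exact setup
[hourly_event_rollup]
# Pre-aggregate the last hour of raw events and store the results
# in a dedicated summary index (must be created beforehand).
search = index=app_events earliest=-1h@h latest=@h \
| stats count AS event_count BY host, sourcetype \
| collect index=summary_app_events
cron_schedule = 5 * * * *
enableSched = 1
```

Reports then search `index=summary_app_events` and sum the pre-computed `event_count` field, which is orders of magnitude cheaper than scanning billions of raw events.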
We run Splunk in production, but processing throughput depends on your hardware, so it will differ from environment to environment.
If you are looking for sizing guidance, the following reference should help you.