As @deepakc already mentioned, there are many factors in sizing _any_ Splunk installation, even before you get to ES. And with ES there is even more. A lot is going on "under the hood" before you enable a single correlation search (managing notables, updating threat intel, updating the assets database and so on). For any reasonable use case you also need decently configured data: you _will_ want those datamodels accelerated, so you will spend resources on the summary-building searches.

On top of that, there are many ways to write a search (any search, not just an ES correlation search) badly. I've seen several Splunk and ES installations completely killed by very badly written searches which could easily have been fixed by rewriting them properly.

A simple case: I've seen a team that would not ask their Splunk admins to configure a field extraction for a sourcetype, so they extracted the field manually in every search. Instead of simply writing, for example,

index=whatever user IN (admin1, admin2, another_admin)

which in a typical case limits the set of processed events pretty well right at the start of the search, they had to do

index=whatever | rex "user: (?<user>\S+)" | search user IN (admin1, admin2, another_admin)

which meant that Splunk had to run the regex against every single event in the given time range and only then filter. That was a huge performance hit.

That's just one example, of course; there are many more antipatterns you can break your searches with.
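
For completeness, the admin-side fix in a case like that is usually a one-line search-time extraction in props.conf on the search head. A minimal sketch, assuming a hypothetical sourcetype my:sourcetype and events containing the literal text "user: <name>" (both names made up for illustration, not from the original searches):

# props.conf (search head) - hedged example; sourcetype and pattern are placeholders
[my:sourcetype]
EXTRACT-user = user: (?<user>\S+)

With that in place the team can write index=whatever user IN (admin1, admin2, another_admin) directly, and the filter becomes part of the base search instead of a rex plus a second search command over everything the time range returns.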