I am keen to get an idea of best practice for estimating the impact of a client request on our Splunk deployment.
For example, the client might say they want real-time monitoring across multiple log sources rendered in a dashboard with other complex correlations.
The resulting question is: do we need to scale our deployment to suit their specific need? It may mean minor changes to the Search Head (e.g. more cores, more memory), or it may mean we need to significantly adjust our deployment model (e.g. purchase more storage; build more Search Heads and Indexers; use a Heavy Forwarder to pre-process rather than a Universal Forwarder...).
If we do need to make such changes, we might determine that the cost of updating the Splunk deployment to suit the client's request is not commensurate with the benefit the client derives from the monitoring.
I have asked around and got some good suggestions (see dot points below), but they only apply after we have already spent time pulling the data in, writing the query, etc. I'm particularly interested in making such a determination well before we get to that point. I'm sure others have experienced this many times, and I'm hoping at least one of you out there is happy to impart some quality advice.
- Determine the time to complete the query
- Determine the burden of concurrent searches on a given search head over time
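For the second point, one rough pre-deployment sanity check is to compare the client's projected search workload against the search head's default concurrency ceiling. Splunk's default formula in limits.conf is max_searches_per_cpu × CPU count + base_max_searches (defaults 1 and 6 respectively); the sketch below assumes those defaults and a hypothetical workload, so verify both against your own limits.conf and the client's actual dashboard design:

```python
# Back-of-envelope search concurrency check for one search head.
# Assumes Splunk's documented default limits.conf formula:
#   max historical searches = max_searches_per_cpu * cpu_count + base_max_searches
# with defaults max_searches_per_cpu=1 and base_max_searches=6.
# All workload numbers below are hypothetical illustrations.

def max_concurrent_searches(cpu_count: int,
                            max_searches_per_cpu: int = 1,
                            base_max_searches: int = 6) -> int:
    """Default ceiling on concurrent searches for a search head."""
    return max_searches_per_cpu * cpu_count + base_max_searches

def headroom(cpu_count: int, scheduled: int, ad_hoc: int, real_time: int) -> int:
    """Search slots left after the client's projected workload.

    Real-time searches each hold a slot (and roughly a core) continuously,
    so they are counted at full weight.
    """
    return max_concurrent_searches(cpu_count) - (scheduled + ad_hoc + real_time)

# Hypothetical: a 16-core search head, with the client requesting a
# dashboard of 6 real-time panels on top of 8 scheduled searches and
# ~4 concurrent ad-hoc users.
print(max_concurrent_searches(16))                        # 22 slots
print(headroom(16, scheduled=8, ad_hoc=4, real_time=6))   # 4 slots remain
```

If the headroom comes out near zero or negative before any data has been onboarded, that is an early signal the request needs more cores or another search head, which feeds directly into the cost-versus-benefit determination above.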