Capacity planning with Splunk isn't always straightforward. Got slow indexing? Add indexers. Got slow searching? Add indexers! I bet you weren't expecting that answer.
How capacity planning helps you scale your deployment
The best practice for capacity planning is to size the environment to near-max load, not average load (unless you want to be wrong half the time). The Monitoring Console, available to admin users, contains a set of dashboards that provide insight into your deployment’s indexing and search performance, licensing, and OS resource usage. Let’s focus on the resource usage dashboards, since they relate directly to pure system load and are useful for capacity planning.
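To illustrate why sizing to near-max beats sizing to the average, here is a minimal sketch in plain Python. The hourly ingest numbers are hypothetical, and the 95th-percentile cutoff is an assumption for illustration, not a Splunk recommendation:

```python
# Hypothetical hourly ingest volumes for one day, in GB (illustrative only).
hourly_gb = [12, 10, 9, 8, 8, 11, 18, 25, 31, 34, 33, 30,
             29, 31, 35, 33, 28, 24, 20, 17, 15, 14, 13, 12]

def near_max(samples, percentile=0.95):
    """Return the value at the given percentile -- a 'near-max' sizing target."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[index]

average = sum(hourly_gb) / len(hourly_gb)
target = near_max(hourly_gb)

# Sizing to the average leaves you under-provisioned roughly half the time;
# sizing to the 95th percentile covers all but the rarest spikes.
print(f"average load: {average:.1f} GB/h, sizing target: {target} GB/h")
```

With these sample numbers the average works out to about 21 GB/h while the near-max target is 34 GB/h, which is the kind of gap that makes average-based sizing risky.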
For information about the Monitoring Console in Splunk Cloud, see Monitor Splunk Cloud deployment health in the Splunk Cloud User Manual.
The Splunk First 90 Days Program does not offer guidance on deployment technologies or deployment sizing because there are too many options to consider. For more information about architecture design, review the sample topologies in the Splunk Validated Architectures white paper to find repeatable topologies you can align with.
Things to do
Find highs and lows. Use the resource usage dashboards in the Monitoring Console to identify the times of day when your data load is at its highest and lowest. Use those numbers to determine the total capacity for your deployment.
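Once you have read the highs and lows off the dashboards, turning them into a capacity number is simple arithmetic. The sketch below uses hypothetical hourly indexing rates, and the 25% headroom factor is an assumption, not an official Splunk figure:

```python
# Hypothetical hourly indexing-rate samples in GB/h, as might be read off the
# Monitoring Console resource usage dashboards (illustrative numbers only).
samples = {0: 9, 3: 7, 6: 15, 9: 32, 12: 30, 15: 34, 18: 21, 21: 12}

peak_hour = max(samples, key=samples.get)   # hour with the highest load
low_hour = min(samples, key=samples.get)    # hour with the lowest load

# Size total capacity to the daily peak plus ~25% headroom for growth
# (the headroom factor is an assumption for illustration).
required_capacity = samples[peak_hour] * 1.25

print(f"peak at {peak_hour}:00 ({samples[peak_hour]} GB/h), "
      f"low at {low_hour}:00 ({samples[low_hour]} GB/h), "
      f"plan for {required_capacity:.1f} GB/h")
```

The same pattern works for CPU or memory samples: take the peak, add headroom, and use that as your deployment's total capacity target.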
Lighten the data load. What's the total and average indexing performance? Consult the resource usage dashboards to look for indexing pipeline bottlenecks.
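A quick back-of-the-envelope check can tell you whether the numbers on those dashboards point at a bottleneck. In this sketch both the daily volume and the 300 GB/day per-indexer ceiling are hypothetical placeholders, not official Splunk sizing figures:

```python
# Rough bottleneck check: is the per-indexer indexing rate close to a
# reference throughput ceiling? All numbers here are assumptions for
# illustration, not official Splunk sizing guidance.
daily_volume_gb = 1500          # total GB indexed per day (hypothetical)
indexer_count = 6
per_indexer_ceiling_gb = 300    # assumed per-indexer daily ceiling

per_indexer_gb = daily_volume_gb / indexer_count
utilization = per_indexer_gb / per_indexer_ceiling_gb

if utilization > 0.8:
    # Near the ceiling: dig into the indexing pipeline dashboards
    # before adding hardware.
    print(f"each indexer handles {per_indexer_gb:.0f} GB/day "
          f"({utilization:.0%} of ceiling) -- look for pipeline bottlenecks")
```

If utilization is high, check the pipeline dashboards for the slow stage first; if it is low but indexing is still slow, the problem likely isn't raw capacity.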