Deployment Architecture

Question regarding Search head clustering



We have a small Splunk environment with one search head and one indexer, both on the same server. Due to increasing Splunk usage recently, we are seeing some performance issues (mainly with reports and alerts). Report generation takes a lot of CPU, and this is affecting concurrent searches. The idea now is to have a server dedicated to reports. Increasing the CPU of the existing server is not an option.

1) Should we add a search head just for reports?
2) Can we have two search heads behind a single URL, since URL naming is standardized across the organization?

Are there any other, better options?


Splunk Employee

SH clustering requires at least 3 search heads, plus one separate instance for your deployer (to deploy the apps to them; a VM could do the trick).

The goal is to increase the overall number of cores available across the cluster.
The advantage of the SHC is that the configuration and the KV store replicate. You can run a search on one SH and later see the job results on another.

Each SH has a unique host name, but they all present the same GUID internally (so they share the same bundles and accelerations).
You can still log in to the UI of a specific instance, but to spread user logins you will need a proxy/load balancer in front of the UI.
You can designate instances as job-execution only (for scheduled searches, for example). If you do so, one strategy is to have the load balancer send user logins to the other SHs.
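For concreteness, here is a rough sketch of what an SHC member's server.conf ends up containing (the hostnames, port, label, and secret below are placeholders; in practice the `splunk init shcluster-config` CLI command writes this for you):

```ini
# Replication port SHC members use to replicate search artifacts
[replication_port://9887]

[shclustering]
disabled = 0
mgmt_uri = https://sh1.example.com:8089
replication_factor = 3
conf_deploy_fetch_url = https://deployer.example.com:8089
shcluster_label = shcluster1
pass4SymmKey = <shared secret, identical on all members>
```

Once every member is configured and restarted, a captain is bootstrapped one time with `splunk bootstrap shcluster-captain`, passing the list of member mgmt URIs.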

If you do not want an SHC, the other route is to have 2 independent SHs, but it's up to you to keep them configured more or less the same and to keep each user on a specific SH, with no easy way to share results.
Also, if you use accelerations (reports or data models), everything will need to be accelerated TWICE, causing extra load on the indexers.

Ultra Champion

When you start to have performance issues, there are many things to consider.

You could add a second search head, or even a search head cluster, but if your indexers are underperforming it will not make any difference, and could even make things worse.

Are you able to share some specifications and an idea of daily data volume, so we can give a more educated response?
Number of cores, memory, storage type (local/remote, SSD/spinning disk), storage volume, and daily GB would be useful.



Here are the server specifications

CPU cores - 4 (I know this should be at least 12, but unfortunately in our case that is not possible)
Memory - 200 GB
Daily Splunk ingest - less than 10 GB
When the reports are running, CPU usage is around 90%; otherwise it is less than 10%.

From the documentation, I see that one search head can manage up to 100 GB/day, so I don't think the indexer is underperforming.



You can barely run a single instance reliably on 4 cores, and definitely not two. At 10 GB/day, unless you need HA, you really shouldn't need more than one instance/server, but you absolutely must get more cores.

If cost is a concern, see about reducing some of that memory. There's no way you need 200 GB for a deployment this small, and RAM is expensive.



Because you have 4 cores, you have very limited search concurrency available.
I'd recommend looking at the long-running searches to see if you can improve their performance.
Also try to spread out the scheduling, so that not all searches trigger at the same time.
For example, if you have 60 searches that need to run every hour, it may be better to run one search every minute than 15 searches every 15 minutes.
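To illustrate the staggering, a savedsearches.conf sketch (the search names are made up): each search still runs hourly, but the cron minute field spreads them out instead of piling everything onto the top of the hour.

```ini
# Each search runs once per hour, but on a different minute
[Hourly report A]
cron_schedule = 0 * * * *

[Hourly report B]
cron_schedule = 1 * * * *

[Hourly report C]
cron_schedule = 2 * * * *
```

The `schedule_window` setting in savedsearches.conf can also help here: it lets the scheduler delay a non-time-critical search within a window rather than firing it exactly on the cron tick.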
