We just started using Splunk within Azure and spun up two Standard_A4 machines to serve as our indexers. However, I'm noticing that the indexers are getting slammed with searches from a specific search head in our environment. The search head is a physical server with 32 cores and 32 GB of memory, while the Azure indexers we deployed as a pair only have 8 cores and 16 GB of RAM each, giving us a total of 16 cores and 32 GB of memory.
So the question is, can this single 32-core search head be the reason why both of these indexers are getting slammed? Would we have to spin up an additional two indexers to satisfy the needs of this single 32-core search head? Should I adjust limits.conf to limit this search head to 8-16 cores? Any advice would be helpful. We are using RAID 0 on the disks and are meeting the IOPS requirements set by Splunk, so I don't think I/O is a concern — mostly CPU, since there are CPU contention messages in splunkd.log.
Your indexers are below Splunk's reference hardware spec with respect to core count; as a first, cheap measure I'd double each indexer's core count.
As for the broader question, yes — a single search head can saturate many indexers, depending on the types of searches run and many other factors.
You should probably add more indexers to accommodate the heavy search load (more indexers = more cores). While the search head and its resources handle user search requests and users in general, the indexers actually perform the searches against the data (remember, the data resides on the indexers — that's also why IOPS matter for indexers: reads and writes). So every search executed will consume one core on each indexer, as well as resources on the search head, for the duration of the search job. See http://docs.splunk.com/Documentation/Splunk/6.4.1/Capacity/Accommodatemanysimultaneoussearches.
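If you want to cap concurrency from that search head in the meantime, the relevant knobs live in limits.conf on the search head itself. The maximum number of concurrent historical searches is roughly base_max_searches + (max_searches_per_cpu × CPU cores), so a 32-core box at the defaults (6 + 1 × 32) can dispatch far more simultaneous searches than 16 indexer cores can absorb. A minimal sketch — the values below are illustrative assumptions, not recommendations, so check the limits.conf spec for your Splunk version before applying:

```
# limits.conf on the search head (illustrative values only)
[search]
# Fixed baseline of concurrent searches (default is 6)
base_max_searches = 6

# Extra concurrent searches allowed per CPU core (default is 1).
# Lowering this reins in a 32-core search head that would otherwise
# dispatch ~38 concurrent searches at the defaults.
max_searches_per_cpu = 0
```

Note that capping concurrency trades search throughput (queued/skipped searches) for indexer headroom; adding indexer cores is the more durable fix.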
Thanks, guys — I figured as much, just wanted a second opinion.