I have a question about using search-head clustering. If it can truly use "commodity" hardware, is there any reason that I can't cluster together a bunch of 4-CPU servers to meet my search requirements? I find that Splunk uses the term "commodity" hardware very loosely. Why couldn't I have 10 servers, for example, providing the search-head clustering capability?
You can cluster search heads as long as they satisfy the minimum Splunk Enterprise system requirements.
That's not really an answer, and certainly not the one that I get from Professional Services. By itself, a 4-CPU server for a search head should be fine; it depends on the number of searches that are performed on it. So, if I scale it out, would it work? If not, why not?
So, what is your actual question, then? Is it about the scaling of Search Head Clustering, whether or not you can cluster 10 servers, or whether clustering 10 4-CPU machines is a good idea? Those are all different questions, but I'll address each of them here for you:
- Yes, SHC scales
- Yes, SHC will work on 10 machines
- Creating a 10-node cluster of 4-CPU machines is not necessarily a good idea. 4-CPU physical servers are hard to find these days, so they are hardly "commodity"; you are more likely to find 12- or 16-CPU machines than anything else in today's market. Chances are you're referring to VMs, in which case you're much better off with a smaller number of machines with more cores rather than the opposite. Any time you introduce an overlay mechanism, such as clustering, you inevitably pay one way or another in overhead (nothing is free), so the fewer machines you can use that satisfy your requirement, the better. If you have 10 4-CPU VMs, you'd get a better experience by consolidating them into 5 8-CPU VMs.
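For what it's worth, the member count mostly changes the bootstrap step, not the procedure, so 5 larger VMs cost you nothing in setup complexity versus 10 smaller ones. A minimal sketch of standing up that 5-member SHC, assuming hypothetical hostnames (sh1 through sh5, plus a separate deployer) and placeholder credentials and ports:

```shell
# Run on each of the 5 members (here shown for sh1), pointing every member
# at the same deployer and shared secret. Hostnames, ports, credentials,
# and the security key below are all placeholders.
splunk init shcluster-config -auth admin:changeme \
    -mgmt_uri https://sh1.example.com:8089 \
    -replication_port 9887 \
    -replication_factor 3 \
    -conf_deploy_fetch_url https://deployer.example.com:8089 \
    -secret mysharedkey \
    -shcluster_label shcluster1
splunk restart

# Then, on any one member, bootstrap the captain with the full member list.
splunk bootstrap shcluster-captain \
    -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089,https://sh4.example.com:8089,https://sh5.example.com:8089" \
    -auth admin:changeme
```

Whether you build 5 members or 10, only the `-servers_list` grows; the per-machine overhead is where the difference shows up.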
Thanks. Yes, that's the type of situation that I'm trying to understand. I am looking at using VMs instead of physical servers, and scaling them horizontally.