Splunk Dev

Deploying more servers with fewer resources, or fewer servers with more resources?

santorof
Path Finder

I am planning to deploy another Splunk environment and am trying to determine which would be more beneficial: more servers with near-minimum resources, or fewer servers with more resources.
http://docs.splunk.com/Documentation/Splunk/6.5.2/Capacity/Referencehardware

I have the option of doubling my indexers (12) with 16 cores and 32 GB of RAM each, or going with half that number of indexers (6) with 32 cores and 32 GB of RAM each. I do have a few sourcetypes that are searched more heavily than others.

If I were to spread the load of those heavy sourcetypes, would it be more beneficial to search across more indexers, or across fewer? Would searching across more indexers affect my search time? I'm also not sure whether I would have to increase the resources behind my cluster master/deployment server with 12 indexers compared to 6.

Is it generally accepted that fewer indexers with more resources is better than more indexers with fewer resources?
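
For what it's worth, here's the rough Python back-of-the-envelope I've been using to compare the two layouts. The daily ingest figure in it is just a placeholder assumption until I pin down our real volume.

    # Rough sizing sketch comparing the two layouts I'm considering.
    # daily_ingest_gb is a placeholder assumption, not a final number.
    daily_ingest_gb = 250  # assumed daily ingest; adjust to actual volume

    options = {
        "12 indexers / 16 cores / 32GB": {"indexers": 12, "cores": 16, "ram_gb": 32},
        "6 indexers / 32 cores / 32GB": {"indexers": 6, "cores": 32, "ram_gb": 32},
    }

    for name, opt in options.items():
        per_idx_ingest = daily_ingest_gb / opt["indexers"]
        total_cores = opt["indexers"] * opt["cores"]
        total_ram = opt["indexers"] * opt["ram_gb"]
        print(f"{name}: {per_idx_ingest:.1f} GB/day per indexer, "
              f"{total_cores} total cores, {total_ram} GB total RAM")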

0 Karma

ddrillic
Ultra Champion

The direction of the industry is Hadoop, which means more servers with fewer resources and less dependency on any one specific node. I find it a robust approach.

0 Karma

mattymo
Splunk Employee
Splunk Employee

what industry is that?

your procurement/ops teams are crying right now lol

1 horse sized duck, or 100 duck sized horses?

- MattyMo
0 Karma

mattymo
Splunk Employee
Splunk Employee

How many search heads do you have?
How much overall data do you ingest?
What about disk?

I would go with the 6 indexers, as even those specs are just OK (cores are looking good, but you'll need more RAM) if you start to get into high data volumes. Production environments should do much better than the bare minimum to avoid pain in the future.

- MattyMo
0 Karma

santorof
Path Finder

Around 250GB daily, with 200TB of storage set aside to spread across my indexers.

Edit: We are planning for one search head with 16 cores and 32 GB of RAM, but might cluster two search heads together with a captain.
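
Quick retention math on that 200TB, if it helps. The ~50% raw-to-disk compression ratio is just an assumed rule of thumb, not something I've measured, and it ignores replication copies.

    # Back-of-the-envelope retention estimate for our 200TB of indexer storage.
    # compression_ratio is an assumed rule of thumb (~50% of raw ingest on disk);
    # actual ratios vary by sourcetype, and replicated copies would cut this down.
    daily_ingest_gb = 250
    total_storage_tb = 200
    compression_ratio = 0.5  # assumed disk usage as a fraction of raw ingest

    daily_disk_gb = daily_ingest_gb * compression_ratio
    retention_days = (total_storage_tb * 1024) / daily_disk_gb
    print(f"~{daily_disk_gb:.0f} GB/day on disk, "
          f"roughly {retention_days:.0f} days of retention")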

0 Karma

mattymo
Splunk Employee
Splunk Employee

250GB daily, with plans to get bigger? What's the max data ingest you could see before the next budget cycle? 😉
Any ITSI or ES in your future?

At that ingest rate I'd go down to 3 indexers with 32 cores/64GB RAM. You want to try to stay in the sweet spot of about 100-200GB/day per indexer, depending on your plans for the future.

Again, 32 cores/128GB RAM is more the realm where you want a prod deployment to be living... you should be shooting to blow our minimum requirements out of the water.
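
Quick sketch of that sweet-spot math, if it helps your planning. The ingest rates in the loop are just example numbers, not your actuals:

    # Indexer count implied by a 100-200GB/day per-indexer comfort zone
    # (rule of thumb only; tune downward if ES/ITSI is in the picture).
    import math

    for daily_ingest_gb in (250, 500, 1000):  # example ingest rates
        min_idx = math.ceil(daily_ingest_gb / 200)  # aggressive end of the band
        max_idx = math.ceil(daily_ingest_gb / 100)  # conservative end of the band
        print(f"{daily_ingest_gb} GB/day -> {min_idx} to {max_idx} indexers")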

- MattyMo
0 Karma

santorof
Path Finder

We have Enterprise Security installed on our search head, and the plan is to ingest up to 500GB/day, if not more. The plan for the new search head is to give it 32 cores and 32GB of RAM.

We also have site redundancy, where I would be splitting my indexers across two different locations. So would 2 or 3 indexers per site at 32 cores and 64 GB of RAM be ideal based on my daily intake?
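
Here's the per-site sketch I've been playing with. The replication factor, compression ratio, and indexer counts in it are assumptions for illustration, not settled design values.

    # Per-site / per-indexer sketch for a two-site cluster at the higher ingest target.
    # Site count, indexers per site, replication factor, and compression ratio are
    # all assumptions for illustration.
    daily_ingest_gb = 500
    sites = 2
    indexers_per_site = 3
    replication_factor = 2   # assumed total copies kept across the cluster
    compression_ratio = 0.5  # assumed disk usage as a fraction of raw ingest

    total_indexers = sites * indexers_per_site
    ingest_per_indexer = daily_ingest_gb / total_indexers
    disk_per_day = daily_ingest_gb * compression_ratio * replication_factor
    disk_per_indexer_per_day = disk_per_day / total_indexers

    print(f"{ingest_per_indexer:.0f} GB/day indexed per indexer")
    print(f"~{disk_per_day:.0f} GB/day written cluster-wide (with replication), "
          f"~{disk_per_indexer_per_day:.0f} GB/day per indexer")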

0 Karma

mattymo
Splunk Employee
Splunk Employee

You should really read the High Performance and Premium App sections of the guide.

http://docs.splunk.com/Documentation/Splunk/6.5.2/Capacity/Referencehardware

- MattyMo
0 Karma