Deployment Architecture

Azure Design Question

JAtkins
New Member

Greetings, I have an architectural question about an on-prem migration to Azure/AWS. It's a complex question, so I'll try to keep it simple. Assume you have a very large Splunk footprint: 20+ indexers with 48 physical cores each, search heads with 30 cores, etc.

Do you need to create a 1:1 match in a cloud provider, or can you use smaller hardware and scale out to save money? Physical CPUs and vCPUs aren't equivalent, so the hardware for a 1:1 match is more expensive.

When I look at Splunk's proposed Azure architecture, they use much smaller VMs for indexers (8 core) and scale out. I'm looking for thoughts/advice on whether you would move these 1:1 into the cloud or rearchitect for a smaller VM size. I'm leaning toward rearchitecting, as cost is a big component of this. I'm not sure how to equate the 48-core on-prem indexers with smaller 8-vCPU VMs, though. I don't think it would be 6 Azure VMs to 1 48-core physical box. Any advice/thoughts are appreciated.
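One way to frame the 48-core-to-8-vCPU question is a rough vCPU parity check. The sketch below is a back-of-envelope only: the 2-threads-per-core mapping and the 0.8 per-node efficiency factor are assumptions (small VMs pay proportionally more overhead for OS, replication, and Splunk daemons), not Splunk guidance, and real sizing is driven by ingest and search load rather than core counts.

```python
import math

def equivalent_vms(physical_cores, vcpus_per_vm, threads_per_core=2, efficiency=0.8):
    """Rough count of small cloud VMs needed to match one physical indexer.

    Assumes 1 physical core ~= threads_per_core vCPUs (hyperthreading),
    discounted by an efficiency factor for per-node overhead on small VMs.
    """
    vcpus_needed = physical_cores * threads_per_core
    effective_vcpus_per_vm = vcpus_per_vm * efficiency
    return math.ceil(vcpus_needed / effective_vcpus_per_vm)

# One 48-core physical indexer vs 8-vCPU cloud VMs:
print(equivalent_vms(48, 8))              # 15 (with a 0.8 efficiency factor)
print(equivalent_vms(48, 8, efficiency=1.0))  # 12 (nominal, no overhead penalty)
# A fleet of 20 such indexers:
print(20 * equivalent_vms(48, 8))         # 300
```

Even the optimistic nominal number (12 small VMs per physical box) shows why a 6:1 mapping is unlikely to hold, and why scaling out with 8-vCPU VMs can balloon the node count.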


esix_splunk
Splunk Employee

This is really a great question, with many facets to contemplate!

First I'll say that my preference is to go with AWS or GCP and use SmartStore-based architectures, until we release Azure Blob storage compatibility for SmartStore.

With that said, attached storage isn't cheap in Azure/AWS/GCP, so as you build out your indexing fleet you'll typically want to consider hot/warm and cold tiers, retention period, replication factor, and search factor. These all play a part in that architecture decision.
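Those factors combine into a rough attached-storage estimate. The sketch below uses common rule-of-thumb ratios (rawdata on disk ~15% of raw ingest, index files ~35%, searchable copies carry both while extra replicated copies carry rawdata only); these percentages are assumptions that vary with your data, so treat the output as an order-of-magnitude figure, not a quote.

```python
def cluster_storage_gb(daily_ingest_gb, retention_days,
                       replication_factor=3, search_factor=2):
    """Rough total attached storage across an indexer cluster, in GB.

    Assumes rawdata compresses to ~15% of raw and index files add ~35%;
    every replicated copy stores rawdata, but only searchable copies
    (search factor) also store index files.
    """
    rawdata = daily_ingest_gb * 0.15 * replication_factor
    index_files = daily_ingest_gb * 0.35 * search_factor
    return (rawdata + index_files) * retention_days

# Example: 500 GB/day ingest, 90-day retention, RF=3 / SF=2:
print(round(cluster_storage_gb(500, 90)))  # 51750 GB across the cluster
```

Running the numbers like this before picking VM sizes makes the hot/warm vs. cold vs. SmartStore trade-off concrete, since attached premium disk is usually the dominant cost line.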

From the technical point of view, 8 vCPUs is well below our recommended spec for a deployment of your size, and I would most definitely not deploy that (8 physical cores / 16 vCPUs, yes, no problem). If you follow our Azure best practices, we recommend Standard_DS15_v2 instances for larger deployments. (For small deployments we do recommend the lower 8-core boxes, but again, avoid that at your scale!)

If cost weren't an issue, I would look at deploying fewer, larger boxes and storage. Depending on search and indexing requirements, you might be able to go from 20 smaller boxes down to 8 to 10 beefier ones. And assuming even data distribution and performant disks, search and indexing shouldn't be a concern.*
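To see how a consolidation like that pencils out, it helps to size by ingest rather than core parity, since Splunk capacity planning is usually expressed as GB/day per indexer. The per-indexer throughput figure and the search-load factor below are hypothetical placeholders; your account team or a platform architect would supply real ones for your workload.

```python
import math

def indexers_needed(total_daily_ingest_gb, per_indexer_gb_per_day=250,
                    search_load_factor=1.0):
    """Rough indexer count from daily ingest and per-node throughput.

    per_indexer_gb_per_day is a placeholder for a beefy cloud indexer;
    search_load_factor > 1 discounts capacity for heavy search activity.
    """
    return math.ceil(total_daily_ingest_gb * search_load_factor
                     / per_indexer_gb_per_day)

# Example: 2 TB/day with moderate search load:
print(indexers_needed(2000))                          # 8 indexers
print(indexers_needed(2000, search_load_factor=1.25))  # 10 indexers
```

Under these assumptions a 2 TB/day workload lands in the 8-10 indexer range, which is how "20 smaller boxes down to 8 to 10 beefier ones" can work out even though raw core counts suggest otherwise.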

If you're on AWS, you can go SmartStore with i3en instances, which have a larger compute footprint, and not be too concerned about storage since we offload to S3. Any additional compute or cache storage can be added horizontally at the indexing tier.

This is a great conversation that your account team should be able to facilitate with one of the platform architects at Splunk. You should definitely engage them to go through this and come up with the best solution, as what you have now raises a lot of questions.

* Not knowing your indexing or search workloads, the above is purely a high-level design conversation.
