Deployment Architecture

What specs are needed for a deployment server to manage 1500 - 2000 forwarders in a multisite indexer clustering environment?

thomas_forbes
Communicator

Hello,

I am assembling a multisite clustered Splunk implementation, and I am having trouble finding the recommended specs for a deployment server that will manage between 1,500 and 2,000 clients across 3 sites. Please advise on CPU, memory, and storage.

Thank you,
Tom Forbes

1 Solution

javiergn
Super Champion

Hi,

I struggled to find any recommended specs, so I'll tell you what I did:

  • 1700 clients
  • 2 sites
  • Splunk 6.2
  • VM with 4 cores, 4 GB RAM, OS disk plus 50 GB of free storage, 1 Gbps network
  • Operating system: Windows Server 2012 initially, but we moved to Red Hat after noticing that deployment to Linux clients wasn't working

One more piece of advice: we keep all of our deployment configs in SVN.
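
If it helps, the moving parts look roughly like this. This is only a minimal sketch; the server class, app name, hostname, and phone-home interval below are placeholders, not our actual values:

    # serverclass.conf on the deployment server -- placeholder names, adjust to your environment
    [serverClass:linux_forwarders]
    whitelist.0 = *
    machineTypesFilter = linux-x86_64

    # deploy a hypothetical app "outputs_app" to that server class
    [serverClass:linux_forwarders:app:outputs_app]
    restartSplunkd = true
    stateOnClient = enabled

    # deploymentclient.conf on each forwarder
    [deployment-client]
    # with 1,500-2,000 clients, a longer phone-home interval reduces load on the deployment server
    phoneHomeIntervalInSecs = 600

    [target-broker:deploymentServer]
    targetUri = ds.example.com:8089

The serverclass.conf plus the etc/deployment-apps tree is essentially what we version in SVN.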

Hope that helps.


thomas_forbes
Communicator

Also, please share any additional information you think is relevant to this question.
