Getting Data In

Is my current architectural design a legitimate deployment for a small Splunk Enterprise infrastructure?

horsefez
Motivator

Hi,

My company is planning to use Splunk in a small enterprise deployment.
I have already read a bit about scaling, infrastructure design, and the number of components involved.

I have been assigned the task of thinking through and designing our deployment.
So... I want to ask whether my thoughts so far make sense.

My plan is to build an infrastructure that looks like the attached picture.

I would use a Heavy Forwarder in the deployment to filter incoming data before it gets indexed. I might not need this capability today, but possibly later.
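
For example, the kind of filtering I have in mind would look roughly like this on the HF (the sourcetype name and the pattern below are just placeholders, not our real data):

    # props.conf on the Heavy Forwarder
    # (hypothetical sourcetype used for illustration)
    [my_app_logs]
    TRANSFORMS-drop_debug = drop_debug_events

    # transforms.conf on the Heavy Forwarder
    # route events matching the regex to the nullQueue so they are never indexed
    [drop_debug_events]
    REGEX = level=DEBUG
    DEST_KEY = queue
    FORMAT = nullQueue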

Is this a legitimate deployment?
Is it OK to configure the Universal Forwarders to send their data to the HF first?
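
For reference, what I have in mind on the forwarder side is roughly this outputs.conf on each UF (hostname and port are placeholders):

    # outputs.conf on each Universal Forwarder
    [tcpout]
    defaultGroup = heavy_forwarder

    # send everything to the single HF on its receiving port
    [tcpout:heavy_forwarder]
    server = hf01.example.com:9997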

1 Solution

javiergn
Super Champion

Your design looks all right to me, but there are a lot of other things you need to consider, such as:

  • Number of end users (more users means more load on the search head and therefore on the indexers)
  • How much data you are ingesting every day
  • Resiliency: if your SH is down you are blind, so what is your DR plan here? The same goes for the HF
  • Physical location of your data
  • etc.

If your budget is limited, and assuming you are indexing less than 200 GB/day, I would do the following:

  • Get rid of the HF. Your indexers can do the filtering too, and they also provide resiliency; the HF is a single point of failure in your diagram (see the forwarder sketch after this list)
  • Go for a virtual Search Head (make sure you allocate enough CPU cores and memory) and use your virtual infrastructure to provide backup and DR for this component
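
As a rough sketch of the forwarder side without the HF (server names are placeholders, and I'm assuming two indexers), pointing the UFs straight at the indexers gives you automatic load balancing and failover, and the filtering props/transforms simply move to the indexers:

    # outputs.conf on each Universal Forwarder
    [tcpout]
    defaultGroup = primary_indexers

    # Splunk automatically load balances across the servers in this list
    [tcpout:primary_indexers]
    server = idx01.example.com:9997, idx02.example.com:9997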

Hope that helps.

Thanks,
J

horsefez
Motivator

Is it a viable strategy to buy an ESX server and run all of the components on a virtual server infrastructure?

javiergn
Super Champion

Other concepts you might want to read about:

  • search head clustering
  • multisite indexer clustering
  • deployment server (see the client-side sketch below)

These topics are all covered in the official Splunk Enterprise documentation.
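
If you do go down the deployment server route, the client side is just a small deploymentclient.conf on each forwarder (hostname is a placeholder; 8089 is the default management port):

    # deploymentclient.conf on each forwarder
    [deployment-client]

    # point the client at the deployment server's management port
    [target-broker:deploymentServer]
    targetUri = ds01.example.com:8089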

horsefez
Motivator

Thank you all! 🙂
