Getting Data In

Is my current architectural design a legitimate deployment for a small Splunk Enterprise infrastructure?

horsefez
Motivator

Hi,

My company has decided to use Splunk in a small enterprise deployment.
I have already read a bit about scaling, infrastructure design, and the number of components involved.

I have been assigned the task of thinking through and designing our deployment.
So I want to ask whether my thoughts so far make any sense.

My plan is to build an infrastructure that looks like the attached picture.

I would use a Heavy Forwarder in the deployment to filter incoming data before it gets indexed. I might not need this capability today, but I may later.

Is this a legit deployment?
Is it ok if I configure the Universal Forwarders to send data to the HF first?
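
For context, my working assumption is that each Universal Forwarder would simply point its outputs at the HF, with an outputs.conf roughly like the sketch below (the hostname and port are placeholders, and the HF would need a matching splunktcp input on that port):

    # outputs.conf on each Universal Forwarder (placeholder hostname/port)
    [tcpout]
    defaultGroup = heavy_forwarder

    [tcpout:heavy_forwarder]
    server = hf01.example.local:9997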

1 Solution

javiergn
Super Champion

Your design looks all right to me, but there are lots of other things you need to consider, such as:

  • Number of end users (this will increase the load on the search heads and therefore on the indexers)
  • How much data you are ingesting every day
  • Resiliency: if your SH is down you are blind, so what's your DR plan here? The same goes for the HF
  • Physical location of your data
  • etc.

If your budget is limited, and assuming you are indexing less than 200 GB/day, I would do the following:

  • Get rid of the HF. Your indexers can do the filtering too (see the sketch after this list), and they also provide resiliency. The HF is a single point of failure in your diagram
  • Go for a virtual search head (make sure you allocate enough CPU cores and memory) and use your virtual infrastructure to provide backup and DR for this component
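
As a rough illustration of the filtering point above, dropping unwanted events at the indexers (or at an HF) is typically done with a props.conf/transforms.conf pair that routes matching events to the nullQueue. The sourcetype and regex below are made-up placeholders:

    # props.conf on the indexers (placeholder sourcetype)
    [my_app_logs]
    TRANSFORMS-drop_debug = drop_debug_events

    # transforms.conf
    [drop_debug_events]
    REGEX = \bDEBUG\b
    DEST_KEY = queue
    FORMAT = nullQueue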

Hope that helps.

Thanks,
J


horsefez
Motivator

Is it a viable strategy to buy an ESX server and run all the components on a virtualized infrastructure?


javiergn
Super Champion

Other concepts you might want to read about:

  • search head clustering
  • multisite indexer clustering
  • deployment server

This is all covered in the Splunk documentation.
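
And if you do add a deployment server to manage the forwarders, each client only needs a small deploymentclient.conf pointing at it; a minimal sketch, assuming a placeholder hostname and the default management port, would be:

    # deploymentclient.conf on each forwarder (placeholder hostname)
    [deployment-client]

    [target-broker:deploymentServer]
    targetUri = ds01.example.local:8089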


horsefez
Motivator

Thank you all! 🙂
