Getting Data In

Is my current architectural design a legitimate deployment for a small Splunk Enterprise infrastructure?

horsefez
Motivator

Hi,

My company has decided to use Splunk in a small Enterprise deployment.
I have already read a bit about scaling, infrastructure design, and the number of components.

I've been assigned the task of thinking through and designing our deployment.
So... I want to ask if my thoughts so far make sense.

My plan is to build an infrastructure that looks like the attached picture.
[Attached: diagram of the proposed deployment]

I would include a Heavy Forwarder in the deployment to filter incoming data before it gets indexed. I might not need this capability today, but maybe later.

Is this a legitimate deployment?
Is it OK to configure the Universal Forwarders to send their data to the HF first?
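For what it's worth, routing the UFs through an intermediate HF would only take an outputs.conf like the following on each UF. This is a minimal sketch; `hf.example.com` is a placeholder for the Heavy Forwarder's address, and 9997 is the conventional (but configurable) receiving port:

```ini
# outputs.conf on each Universal Forwarder
# hf.example.com is a placeholder for your Heavy Forwarder
[tcpout]
defaultGroup = heavy_forwarder

[tcpout:heavy_forwarder]
server = hf.example.com:9997
```

The HF would in turn need a matching input enabled (e.g. a splunktcp input on port 9997) and its own outputs.conf pointing at the indexers.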

1 Solution

javiergn
Super Champion

Your design looks all right to me, but there are lots of other things you need to consider, such as:

  • Number of end users (this will increase the load on the search heads and therefore on the indexers)
  • Amount of data you are ingesting every day
  • Resiliency: if your SH is down you are blind, so what's your DR plan here? The same goes for the HF
  • Physical location of your data
  • etc.

If your budget is limited, and assuming you are indexing less than 200GB/day, I would do the following:

  • Get rid of the HF. Your indexers can do the filtering too, and having more than one also provides resiliency. The HF is a single point of failure in your diagram
  • Go for a virtual Search Head (make sure you allocate enough CPU cores and memory) and use your virtual infrastructure to provide backup and DR for this component
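As a sketch of the kind of filtering the indexers can take over from the HF: index-time event filtering is done with a props.conf/transforms.conf pair that routes unwanted events to the nullQueue. The sourcetype name and regex below are placeholders for illustration:

```ini
# props.conf on the indexers
# my_sourcetype is a placeholder for an actual sourcetype
[my_sourcetype]
TRANSFORMS-filter = drop_debug_events

# transforms.conf on the indexers
[drop_debug_events]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue
```

Events from that sourcetype matching the regex are discarded before indexing; everything else is indexed normally. The same stanzas would work on a HF, which is why the HF adds no filtering capability the indexers don't already have.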

Hope that helps.

Thanks,
J



horsefez
Motivator

Is it a viable strategy to buy an ESX server and run all the components on a virtual server infrastructure?


javiergn
Super Champion

Other concepts you might want to read about:

  • search head clustering
  • multisite indexer clustering
  • deployment server

All of these are covered in the official Splunk Enterprise documentation.
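On the deployment server point: forwarders opt in as deployment clients with a small deploymentclient.conf. A minimal sketch, with `ds.example.com` as a placeholder for the deployment server and 8089 being the default management port:

```ini
# deploymentclient.conf on each forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```

The deployment server then pushes apps (e.g. a common outputs.conf) to the clients, which keeps forwarder configuration centralized as the deployment grows.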


horsefez
Motivator

Thank you all! 🙂
