Is my current architectural design a legitimate deployment for a small Splunk Enterprise infrastructure?

horsefez
Motivator

Hi,

My company has decided to use Splunk in a small enterprise deployment.
I've already read a bit about scaling, infrastructure design, and the number of components involved.

I've been assigned the task of thinking about and designing our deployment.
So... I want to ask whether my thoughts so far make any sense.

My plan is to build an infrastructure that looks like the attached picture.

I would use a Heavy Forwarder to filter data coming into the deployment before it gets indexed. I might not need this capability today, but perhaps later.
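
For reference, my understanding is that the filtering on the HF would be a props.conf/transforms.conf pair that routes unwanted events to the nullQueue; something roughly like the following, where the source path and regex are just placeholders:

    props.conf:

        [source::/var/log/myapp/*.log]
        TRANSFORMS-drop_debug = drop_debug_events

    transforms.conf:

        [drop_debug_events]
        # Discard any event containing DEBUG before it is indexed
        REGEX = DEBUG
        DEST_KEY = queue
        FORMAT = nullQueue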

Is this a legit deployment?
Is it ok if I configure the Universal Forwarders to send data to the HF first?
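
For the second question, my assumption is that outputs.conf on each Universal Forwarder would simply point at the HF instead of at the indexers (hostname and port below are placeholders):

    [tcpout]
    defaultGroup = heavy_forwarder

    [tcpout:heavy_forwarder]
    # All UF traffic goes to the HF, which filters and then forwards to the indexers
    server = hf.example.com:9997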

javiergn
Super Champion

Your design looks all right to me, but there are lots of other things you need to consider, such as:

  • Number of end users (this will increase the load on the search heads and therefore on the indexers)
  • Amount of data you are ingesting every day
  • Resiliency: if your SH is down you are blind, so what's your DR plan here? The same goes for the HF
  • Physical location of your data
  • etc.

If your budget is limited, and assuming you are indexing less than 200GB/day, I would do the following:

  • Get rid of the HF. Your indexers can do the filtering too, and they also provide resiliency. The HF is a single point of failure in your diagram (see the outputs.conf sketch below)
  • Go for a virtual search head (make sure you allocate enough CPU cores and memory) and use your virtual infrastructure to provide backup and DR for this component
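
To illustrate the first point: with the HF gone, each Universal Forwarder can point straight at both indexers and let Splunk's built-in load balancing spread events across them. A minimal sketch of outputs.conf on a UF (hostnames and port are placeholders), and the same props.conf/transforms.conf nullQueue filtering you would have put on the HF works on the indexers as well:

    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    # Automatic load balancing across both indexers; if one goes down,
    # the forwarders keep sending to the remaining one
    server = idx1.example.com:9997, idx2.example.com:9997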

Hope that helps.

Thanks,
J

horsefez
Motivator

Is it a viable strategy to buy an ESX server and run all the components on a virtual server infrastructure?

javiergn
Super Champion

Other concepts you might want to read about:

  • search head clustering
  • multisite indexer clustering
  • deployment server

This is all covered in the official Splunk documentation.
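
As a quick taste of the last one: a deployment server pushes apps (for example, one containing the outputs.conf above) to forwarders based on serverclass.conf. A minimal sketch, where the server class and app names are just examples, might be:

    [serverClass:all_forwarders]
    # Match every client that phones home to this deployment server
    whitelist.0 = *

    [serverClass:all_forwarders:app:outputs_to_indexers]
    # Push this app to matching forwarders and restart them afterwards
    restartSplunkd = true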

horsefez
Motivator

Thank you all! 🙂
