Deployment Architecture

Anyone have experience with deployment servers in a kubernetes/openshift environment?

AHBrook
Path Finder

Hey everyone!

We're currently in the process of getting ready to deploy a Splunk Cloud instance to migrate our local on-prem version from. Currently, our environment is a hodge-podge of installs, including completely unmanaged universal forwarders, a couple heavy forwarder clusters, and so on. We also have resources both in our local datacenter and in various cloud providers. 

I've been of the thought for a while that we should toss the deployment servers into a container environment. I was curious if anyone had experience with doing this?

Here's the design I want to build towards:

  • Running at least two instances of Splunk Enterprise, so that we have redundancy and load balancing and can transparently upgrade
  • The instances would not have any indexer or search head functionality, per Splunk's best practices
  • Ideally, the instances would not have any web interfaces, because everything would be code managed
  • All the instances would be configured to talk up to the Splunk Cloud environment as part of their initial deploy
  • All of the instances would use a shared storage location for their apps, including self-configuration for anything beyond the initial setup. This shared storage location would be git-controlled.
  • In an ideal world, the individual Splunk components would not care which deployment server they talked to - they would just check in to a load balanced URI.

Now, I know this is massively over-engineering the solution. We've got a couple thousand potential endpoints to manage, so a single standalone deployment server would do the trick. But I want to try this route for two reasons. First, I think it will scale better - especially if I get it agnostic enough that we can use it to deploy to AWS or Azure and get cloud-local deployment servers. Second, and perhaps more importantly, I want to practice and stretch my skills with containers. I've already worked with our cloud team to build out a Splunk Connect for Kubernetes setup in order to monitor our pods and Openshift environment. I want to take this opportunity to learn.
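For context, the check-in behavior I'm describing would be driven by each forwarder's deploymentclient.conf; a rough sketch of what I have in mind, where ds.example.com is a hypothetical load-balanced DNS name sitting in front of the deployment server instances:

```ini
# deploymentclient.conf on each forwarder (ds.example.com is hypothetical)
[deployment-client]

[target-broker:deploymentServer]
# Forwarders phone home to the load-balanced URI rather than a specific DS host
targetUri = ds.example.com:8089
```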

1 Solution

gcusello
SplunkTrust

Hi @AHBrook,

I assume you're talking about a private cloud rather than Splunk Cloud.

In any case, there's no problem having the DS in the cloud, although I prefer to keep the Deployment Server on an on-prem system, especially if you have more systems on-prem than in your private cloud!

I'd choose the solution that requires opening the fewest firewall connections between your systems and the DS.

Answering your questions:

  • "Running at least two instances of Splunk Enterprise, so that we have redundancy and load balancing and can transparently upgrade":
    • if you're speaking of the DS, the DS isn't a Single Point of Failure, because your Splunk infrastructure can work without it; so you don't need two DSs, and a single one, either on-prem or in the cloud, is enough,
    • alternatively, you could design for two DSs, one on-prem and one in your private cloud, each managing the corresponding systems,
    • the two instances must be connected to each other, and the first manages both its own clients and the second DS.
  • The instances would not have any Indexer or Search Head functionality, per Splunk's best practices:
    • if your DS has to manage more than 50 clients, it must run on a dedicated system, in other words one not shared with other roles.
  • Ideally, the instances would not have any web interfaces, because everything would be code managed:
    • Correct: it's usually a best practice, for security reasons, to disable the web interface of all systems after the implementation phase (except, obviously, Search Heads!).
    • That said, the DS could be an exception to this general rule, because some GUI features are very useful for management; but it isn't mandatory.
  • All the instances would be configured to talk up to the Splunk Cloud environment as part of their initial deploy:
    • the connections depend on your architecture: where are the Indexers, Master Node, Search Heads, Deployer, and Deployment Server located?
    • Wherever they are located, it's important that all Splunk systems send their logs to the Indexers.
  • All the instances would use a shared storage location for their apps, including self-configuration for anything beyond the initial setup. This shared storage location would be git-controlled:
    • I don't like shared storage, for many reasons: first of all, indexers are slower on it and there's no advantage to using shared storage,
    • this is an old approach that Splunk abandoned many releases ago (around 5.0): use dedicated storage for each server's system disks, and use Splunk's replication features for Indexers and Search Heads.
  • In an ideal world, the individual Splunk components would not care which deployment server they talked to - they would just check in to a load balanced URI:
    • as I said, you don't need an always-active DS, so you don't need two DSs and a Load Balancer.
    • In a Splunk architecture, a Load Balancer (hardware, software, or DNS-based) is mandatory only in front of Heavy Forwarders that have to ingest syslog.
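As a minimal sketch of how a single DS maps groups of clients to apps (all hostnames and app names here are hypothetical examples):

```ini
# serverclass.conf on the Deployment Server
[serverClass:linux_uf]
# Match clients by hostname pattern
whitelist.0 = linux-*.example.com

[serverClass:linux_uf:app:TA_nix]
# Push this app to matching clients and restart them afterwards
restartSplunkd = true
stateOnClient = enabled
```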

In addition, you spoke about a Heavy Forwarder cluster: there is no HF cluster feature in Splunk. It's a best practice to have redundant HFs, which means two or more HFs, but that isn't a cluster; load balancing is managed by Splunk itself, so you don't need a Load Balancer for the HFs.

About your concern that the DS has to manage more than 2000 clients: one DS can manage them without problems; if needed, you could give your DS more resources (24 CPUs instead of 12, and 24/32 GB RAM instead of 12), or add more DSs.

If you have more than one DS, always use the tiered approach where one main DS manages both its own clients and the other DSs.
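In this tiered setup, the secondary DS is itself a deployment client of the main DS, receiving apps into its own deployment-apps directory so it can redistribute them; a sketch, with a hypothetical hostname:

```ini
# deploymentclient.conf on the secondary DS (main-ds.example.com is hypothetical)
[deployment-client]
# Land received apps in deployment-apps instead of etc/apps,
# so this DS can serve them on to its own clients
repositoryLocation = $SPLUNK_HOME/etc/deployment-apps

[target-broker:deploymentServer]
targetUri = main-ds.example.com:8089
```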

One final hint: to manage such a large architecture, you should engage a Splunk Architect for design and management; this isn't a job for Community questions. Alternatively, ask your Manager to pay for Splunk Architect training for you: it's safer for your company!

Ciao.

Giuseppe



AHBrook
Path Finder

Thanks for the detailed replies!

That's a good point about the redundancy aspect of the Deployment Server. I guess it makes sense that I shouldn't have to run more than one at any given time, unless we go one cloud, one on-prem. I'd still like them to share a code base and setup as much as possible, just to make updating and maintenance as easy as possible.

In the environment we are building out, the indexers and search heads are with Splunk Cloud. The Deployment Server and Heavy Forwarders are going to be managed by us and talk to the Splunk Cloud environment.

I apologize for not being clear about the "shared storage." I was referring to the $SPLUNK_HOME/etc/deployment-apps folder. I recognize that sharing storage for indexers is a bad idea(tm).

And yeah, the idea of putting the deployment servers behind a load balancer would be for the initial check-in of deployed components. I haven't gotten that far yet, but I'd like to make it so that whenever a Splunk UF or HF is deployed in our environment, it knows where to look to register with our Splunk instance. I don't know yet whether it's possible to have them check in to our Cloud instance, or whether they check in to our Deployment Servers directly.

The clarification on Heavy Forwarders is well taken. 🙂 I got my terms mixed up a bit. The plan we have is for horizontal scaling, where our network syslog sources and other systems that don't have local log storage would not have to care which node they talk to.


The architecture element is also well taken. I've so far only gotten my Splunk Power User certification, but I have taken Splunk System and Data Administration (I need to get on taking that cert exam once things calm down). I hope to get Architect someday. My coworker has also taken Splunk Cloud Administrator courses. We are pretty well engaged with Splunk when it comes to architecture, planning, and setting up our environment, but again, this is a bit of a side project / proof of concept for me. 🙂



gcusello
SplunkTrust

Hi @AHBrook,

as I said, my hint is to have only one DS, or at most one on-prem and one in the cloud; but in that case, you have to configure the second one as a slave of the first.

About provisioning a new forwarder, my hint is to create a dedicated app (called e.g. TA_Forwarders) that contains only three files:

  • app.conf: describing the app,
  • deploymentclient.conf: addressing the DS,
  • outputs.conf: addressing the IDXs.

In this way, when you install a new forwarder, you only have to copy this app into $SPLUNK_HOME/etc/apps and restart the local Splunk instance; it will then receive all the correct configurations from the DS, and you don't need any shared folder for configurations.
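A minimal sketch of the three files in such an app (the DS and indexer hostnames are hypothetical placeholders):

```ini
# TA_Forwarders/default/app.conf - describes the app
[install]
state = enabled

# TA_Forwarders/default/deploymentclient.conf - addresses the DS
[target-broker:deploymentServer]
targetUri = ds.example.com:8089

# TA_Forwarders/default/outputs.conf - addresses the indexers
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Forwarder auto-load-balances across the listed indexers
server = idx1.example.com:9997, idx2.example.com:9997
```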

HF horizontal scaling is always guaranteed: you can add a new HF, or give more resources to the existing ones, at any time.

As I said, you have a very large architecture that requires a Splunk Architect review; perhaps an external one at the beginning, and then you!

See you next time!

Please accept an answer, for the other people of the Community.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉
