Deployment Architecture

Best practice to deploy client apps to Splunk servers?

mitag
Contributor

What are the detailed steps to deploy client apps (mainly inputs.conf) to Splunk servers (indexers, SHs, deployment servers, the monitoring console, any other Splunk server roles)?

Or (another way to ask the same question): what is the best practice for being able to search the same logs (sourcetypes) that we collect / forward on all the hosts in our environment? (And for keeping these sourcetypes in sync across all instance types?)

Reason I ask:

"About deployment server and forwarder management" says this among other things:

Indexer clusters

Do not use deployment server or forwarder management to manage configuration files across peer nodes (indexers) in an indexer cluster. Instead, use the configuration bundle method. You can, however, use the deployment server to distribute updates to the master node, which then uses the configuration bundle method to distribute them to the peer nodes. See "Update common peer configurations" in the Managing Indexers and Clusters of Indexers manual.

This seems to imply the steps are perhaps these?

  1. Manually copy (and then manually update when there are changes) deployment apps for clients to the Master Apps folder on the DS?
  2. Distribute the configuration bundle? (Rough sketch of what I think that involves below.)
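
If I'm reading the docs right, step 2 would look roughly like this on the cluster master (the app and file names here are made-up examples; the commands are the ones the docs describe for the master node):

    # Place the app under master-apps on the cluster master, e.g.:
    #   $SPLUNK_HOME/etc/master-apps/myorg_sourcetypes/local/props.conf
    # Then validate and push the bundle to the peer nodes:
    $SPLUNK_HOME/bin/splunk validate cluster-bundle
    $SPLUNK_HOME/bin/splunk apply cluster-bundle
    # Check distribution progress:
    $SPLUNK_HOME/bin/splunk show cluster-bundle-status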

If that's the best practice - is there a way to sync client deployment apps with the Master Apps on Splunk servers?

Another option I keep hearing about is to simply install a Splunk Universal Forwarder (SUF) on each Splunk server. But: is this a good practice? Does it duplicate forwarding, say, of Splunk's internal logs (such as index="_*")?

Thank you!

P.S. In your answers please either list steps that can be followed - or ask questions you need answered before you can provide those steps - or point me to resources where those steps are clearly listed. I get it that it's complex - however explaining the why of this complexity isn't a substitute for listing steps for at least some environments.

P.P.S. My environment: MN (master node), SH (search head), DS (deployment server), 3 clustered indexers, MC (monitoring console), all running CentOS 7.7. There is also a 2nd site but let's keep it simple for now. Splunk Enterprise v.7.1.2, to be upgraded to 8.0.1 within a few weeks.

woodcock
Esteemed Legend

I use the Deployment Server to manage the files for both deployer and cluster master nodes. The trick is to know this (read both answers):
https://answers.splunk.com/answers/107452/targetrepositorylocation-in-serverclass-conf-global-only-o...
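
Roughly, the idea is to make the CM (and the deployer) ordinary deployment clients, but redirect the apps they receive so they land where those roles expect them. One way to do that (the host name and paths below are only examples, and the linked answers discuss the serverclass.conf targetRepositoryLocation side of it) is in deploymentclient.conf on the cluster master:

    [deployment-client]
    # Land DS-delivered apps in master-apps so the CM can push them to the peers
    repositoryLocation = $SPLUNK_HOME/etc/master-apps
    # Ignore any repository location the DS tries to dictate
    serverRepositoryLocationPolicy = rejectAlways

    [target-broker:deploymentServer]
    targetUri = ds.example.com:8089

After the DS delivers new apps, you still run "splunk apply cluster-bundle" on the CM (or "splunk apply shcluster-bundle" on the deployer) to push them down to the peers/members.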

mitag
Contributor

Appreciate the answer - yet it flew over my head and doesn't appear to address this specific request:

P.S. In your answers please either list steps that can be followed - or ask questions you need answered before you can provide those steps - or point me to resources where those steps are clearly listed.

In addition, the answer seems to cover only the DS and CM roles - not the others - i.e. it wouldn't fully answer my question even if it listed the steps?

It seems to me possible answers are these two, considering our use of deployment servers to push our forwarder configurations:

  • Just install a SUF and use the DS to sync the configuration. Drawback: some duplicate data, such as some or all duplicate index="_internal" sourcetypes.
  • The "distribute the configuration bundle" method. Drawbacks: (1) no clear documentation (detailed steps) on how to get it to work for common environment types; (2) apparently no way to sync client apps with Master apps; deployment servers may still have to be managed / synced manually.

Does this list sound right?

richgalloway
SplunkTrust

This is a many-layered question.

First, not all configurations get distributed to all instance types. In particular (since you mentioned it) inputs.conf should be installed only on forwarders.
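
For example, a forwarder app might contain nothing but a small inputs.conf (the path, sourcetype, and index here are only illustrations):

    [monitor:///var/log/messages]
    sourcetype = syslog
    index = main
    disabled = 0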

Technically, only indexers are "search peers"; however, all instance types are search peers to the Monitoring Console.

As you've discovered, Splunk has a few ways to install apps. The best method(s) depend on your Splunk environment. For instance, independent indexers can get their apps from the Deployment Server (DS), but clustered indexers must get theirs from the Master Node (MN). It is possible for the Master Node to get apps from the DS, but that's an advanced topic. Similarly, independent search heads can get apps from the DS, but clustered SHs get them from the SHC Deployer. I believe it's possible for the Deployer to get apps from the DS.
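
As a rough illustration of who pushes what (the staging directories shown are the defaults, and the SH member URI is just an example):

    # Deployment Server -> forwarders and other non-clustered clients
    #   apps staged in $SPLUNK_HOME/etc/deployment-apps on the DS
    $SPLUNK_HOME/bin/splunk reload deploy-server

    # Master Node -> clustered indexers (peer nodes)
    #   apps staged in $SPLUNK_HOME/etc/master-apps on the MN
    $SPLUNK_HOME/bin/splunk apply cluster-bundle

    # Deployer -> search head cluster members
    #   apps staged in $SPLUNK_HOME/etc/shcluster/apps on the deployer
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089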

Before you think about having a central installation that cascades to Deployer and MN, keep in mind my comment about what gets installed where. The same app may get installed in all tiers, but it will be slightly different at each one. Forwarders get inputs.conf and maybe props and transforms; indexers get props and transforms, but not bin or samples; and SHs get props, transforms, and perhaps other files. The exact mix depends on the app.
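
A hypothetical split of one add-on across the tiers might look like this (the app names and file lists are purely illustrative):

    DS:     deployment-apps/myorg_ta_foo_forwarder/local/inputs.conf
    MN:     master-apps/myorg_ta_foo_indexer/local/props.conf
            master-apps/myorg_ta_foo_indexer/local/transforms.conf
    SH(s):  etc/apps/myorg_ta_foo_search/local/props.conf
            etc/apps/myorg_ta_foo_search/local/transforms.conf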

This is why having multiple deployment mechanisms can actually simplify matters. Apps can be broken up as necessary and the required parts put in the appropriate places for distribution to the instances where they are needed. Yes, it's not as simple as it perhaps could be, but it's what we have at the moment.

---
If this reply helps you, Karma would be appreciated.

mitag
Contributor

This is a many-layered question.

What is the best way to simplify it? 🙂 Would detailing my configuration help? (MN, SH, DS, 3 clustered indexers, MC. There is also a 2nd site but let's keep it simple for now. Splunk Enterprise v.7.1.2, to be upgraded to 8.0.1 within a few weeks.) If there isn't one "best practice", is it possible to list the top 3, with the steps?

Because I am fairly new to Splunk - but tasked with maintaining it and making it useful for my team - I have two priorities:

  • make it as simple as possible (i.e. steps that are easy to follow) should I need to replicate or reproduce the environment
  • do least possible damage

Thank you.

richgalloway
SplunkTrust

I think it best to work with each deployment method separately. Trying to merge ("simplify") them will only create complications.

One best practice I should mention is to avoid putting configuration changes in $SPLUNK_HOME/etc/system/local. Local changes should be made in an app, even if it means creating an app just for them. For example, create myorg_deploymentclient to hold the deploymentclient.conf file for your forwarders and myorg_all_indexes to hold indexes.conf for your indexers.
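
As a sketch (the host name and index settings are just examples):

    myorg_deploymentclient/local/deploymentclient.conf:
        [deployment-client]

        [target-broker:deploymentServer]
        targetUri = ds.example.com:8089

    myorg_all_indexes/local/indexes.conf:
        [myorg_app_logs]
        homePath   = $SPLUNK_DB/myorg_app_logs/db
        coldPath   = $SPLUNK_DB/myorg_app_logs/colddb
        thawedPath = $SPLUNK_DB/myorg_app_logs/thaweddb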

---
If this reply helps you, Karma would be appreciated.

mitag
Contributor

Sorry, Rich - your comment (and your answer) are flying high over my head. I don't understand what you're saying or how to translate it into instructions that can be followed. I need steps I can follow. If the steps are specific to my environment, I'll be happy to answer any questions about it.

richgalloway
SplunkTrust

1: Read the documentation. Start with Inherit a Splunk Enterprise Deployment (https://docs.splunk.com/Documentation/Splunk/8.0.1/InheritedDeployment/Introduction) then move on to the Admin Manual (https://docs.splunk.com/Documentation/Splunk/8.0.1/Admin/Howtousethismanual) and Distributed Deployment Manual (https://docs.splunk.com/Documentation/Splunk/8.0.1/Deploy/Distributedoverview). These will help you understand the basics of how the various Splunk instances interact.

2: Insist your employer send you to the Splunk System Administration class. It will cover the basics as well as much of my answer and comments.

3: Understand that a multi-site clustered Splunk environment is not a simple thing to run. While there may be ways to make it easier, it's best to understand how things are intended to work before you try creating shortcuts.

This isn't a good forum for learning Splunk from scratch. It's best for solving problems and getting answers to specific questions.

Yes, much of what you need to know is specific to your environment, but there's a lot that applies to all environments. Things like terminology (see https://docs.splunk.com/Splexicon), what each instance type does, how distributed search works, how to create an app, and more. The documents above should get you started.

---
If this reply helps you, Karma would be appreciated.