Splunk Enterprise

How do you maintain your Splunk config?

muebel (SplunkTrust)

Splunk is so nice, they made config management systems thrice! The cluster manager, deployment server, and SHC deployer let you centralize configuration, which can then be pushed to (or pulled by) the rest of your Splunk infrastructure.
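For reference, here's a quick sketch of where each of those three systems stages the configuration it distributes. Paths assume a default $SPLUNK_HOME; newer releases use manager-apps where older ones used master-apps:

```
# Where each management tier stages the config it hands out (default layout):
ls "$SPLUNK_HOME/etc/manager-apps"     # cluster manager -> bundle pushed to peer indexers
ls "$SPLUNK_HOME/etc/deployment-apps"  # deployment server -> apps pulled by deployment clients
ls "$SPLUNK_HOME/etc/shcluster/apps"   # SHC deployer -> apps pushed to search head cluster members
```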

But how do you manage the config on these configuration management systems? Is it simply someone SSH'ing into them and updating config files? Do you do something more sophisticated than that?

Any and all answers welcome! The most charming answer will be selected as the best answer!



PickleRick (SplunkTrust)

There are many different approaches possible. The most "process-oriented" would of course be that a change proposal is submitted somewhere (probably as a git branch), gets tested in a pre-prod environment, is verified, accepted, and merged into the main branch, and from there is pushed to the components using some form of orchestration tool (Ansible? Puppet? A set of your own hand-made tools?).

But there are some other approaches that can be useful in some circumstances. As I have often pointed out, central distribution of apps has its drawbacks - among them the possibility of running any executable with the effective rights of the user running the Splunk component (which in the case of Windows forwarders is often Local System!), with no obvious means of auditing it. So I have some environments from which I simply copy the configs every now and then and push them to a repository for history.

There is no one-size-fits-all approach because the environments differ and requirements differ as well.

But any practical approach should IMO involve a robust and easily accessible history of configs, and give you the ability to roll back to a particular point in the past.
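A minimal sketch of that snapshot-for-history idea, assuming a plain git repo and rsync; the paths, remote, and layout are illustrative, not PickleRick's actual tooling:

```
#!/bin/sh
# Hypothetical snapshot script: copy the locally managed Splunk config into a
# git repo so there is an easy history to diff against and roll back to.
# All paths and the remote/branch names are assumptions for illustration.
SPLUNK_ETC=/opt/splunk/etc
REPO=/srv/splunk-config-history

cd "$REPO" || exit 1
rsync -a --delete "$SPLUNK_ETC/system/local/" system_local/
rsync -a --delete "$SPLUNK_ETC/apps/"         apps/

git add -A
# Commit only when something actually changed, then push for safekeeping.
git diff --cached --quiet || {
    git commit -m "config snapshot $(date -u +%Y-%m-%dT%H:%M:%SZ)"
    git push origin main
}

# Rolling back a single file later is just:
#   git checkout <commit> -- apps/<app>/local/inputs.conf
```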


mmccul (SplunkTrust)

git + POSIX sh + ssh forced commands

Each app is its own repo in git. If I'm lucky, I'm using GitLab, which allows me to have a folder hierarchy to identify apps by function (e.g. apps for the CM, apps for the SHD, apps for the DS to push to the UFs). Then I built a shell script that is attached to a forced command in ~splunk/.ssh/authorized_keys, so when you authenticate to that account you pass the name of a repo. It validates that it is a valid repo, git pulls the main branch of that repo, then uses a token attached to a local Splunk user that only has the deployment permissions to deploy the update (e.g. `splunk reload deploy-server` or the SHD deployment command sequence).
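A hedged sketch of what such a forced-command wrapper could look like; the paths, repo layout, and credential handling are illustrative assumptions rather than mmccul's actual script:

```
#!/bin/sh
# Hypothetical forced-command wrapper, wired up in ~splunk/.ssh/authorized_keys as:
#   command="/usr/local/bin/splunk-deploy.sh",no-pty,no-port-forwarding ssh-ed25519 AAAA... ci-key
# The caller passes a repo name, which arrives in SSH_ORIGINAL_COMMAND.
REPO_ROOT=/opt/splunk/etc/deployment-apps
repo=$SSH_ORIGINAL_COMMAND

# Accept only a bare repo name that matches an existing checkout - no paths, no options.
case "$repo" in
    ""|*[!A-Za-z0-9_-]*) echo "invalid repo name" >&2; exit 1 ;;
esac
[ -d "$REPO_ROOT/$repo/.git" ] || { echo "unknown repo: $repo" >&2; exit 1; }

cd "$REPO_ROOT/$repo" || exit 1
git pull --ff-only origin main || exit 1

# Ask the DS to pick up the change; credentials for the locked-down local Splunk
# account are read from a root-only file here (illustrative, not mmccul's token setup).
/opt/splunk/bin/splunk reload deploy-server -auth "$(cat /opt/splunk/.deploy_auth)"
```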

For repos intended to be updated by non-admins, I use push rules to lock the repo so it can't receive administrative config files (e.g. authentication.conf).


venky1544
Builder

Hi @muebel 

Below is a reference link on the same topic:

https://community.splunk.com/t5/Getting-Data-In/Best-way-to-use-git-for-source-version-control-with-...

I have been using the suggestions in that link for my use case and am still working on it.

Hope it helps

TRex (SplunkTrust)

Brief Summary

Git...Git...Git and Ansible.

We've got multiple git repositories in which changes are proposed, tracked, and codified, which are then distributed by a mix of git sync and Ansible pushes.

This helps our team of multiple engineers all have a hand in the work without stepping on each other. Combined with a git-aware shell profile, it gives us visibility into when local changes have been made and educates us on which files (*cough* app.conf *cough*) are created automatically.
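For the visibility piece, here's a small sketch of the kind of profile hook that can surface local drift; the path and the login-time check are assumptions, not TRex's exact setup:

```
# Hypothetical snippet for the splunk user's shell profile on a management node:
# show anything that has drifted from the git checkout every time someone logs in.
APPS_REPO=/opt/splunk/etc/deployment-apps
if [ -d "$APPS_REPO/.git" ]; then
    (
        cd "$APPS_REPO" &&
        git fetch --quiet origin &&
        git status --short    # locally created/modified files (hello, app.conf) show up here
    )
fi
```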

 

In Depth:

In an attempt to standardize our TA and app experience, we've built an architecture around the utilization of a DS.

The Cluster Manager, Search Head Deployers, single-purpose distributed search heads, and modular heavy forwarders all check into a centralized enterprise deployment server. That deployment server pushes apps out to the top-tier management systems into their respective directories, like the Search Head Deployer's shcluster/apps. We rely on Splunk's tiered DS options to make sure everything runs properly without unnecessary restarts.
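As a concrete illustration of that "land the app in the right directory on the management tier" step, here is a hedged serverclass sketch; the class, host, and app names are made up, and the real environment will differ:

```
# Hypothetical serverclass on the central deployment server: deliver a TA to the
# SH deployer and land it in shcluster/apps rather than the default etc/apps.
cat >> "$SPLUNK_HOME/etc/system/local/serverclass.conf" <<'EOF'
[serverClass:shc_mgmt]
whitelist.0 = sh-deployer01

[serverClass:shc_mgmt:app:Splunk_TA_example]
targetRepositoryLocation = $SPLUNK_HOME/etc/shcluster/apps
restartSplunkd = false
EOF

"$SPLUNK_HOME/bin/splunk" reload deploy-server -class shc_mgmt
```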

This allows us to update a single repository with TAs for the environment and distribute them to the search and indexing tiers without having to track and remember that "oh, the indexer version of the Cisco TA is located and maintained here, and the search version is over here, and they have slightly different regex for this new field extraction the customer wanted."

We do something similar with our deployment servers for UFs.

 

In the end, we maintain 4 repositories and use Ansible to push out the majority of our configurations, instead of the 8 or 9 repositories we would need if we handled this by tier.

 

muebel (SplunkTrust)

oooo tiered DS

do you do any checking or linting on the PRs?


TRex (SplunkTrust)
All the repos require review for any PRs that get applied, and the PR creator can't approve their own. We don't do any checking outside of that, other than maintaining a reduced development environment that uses similar mechanisms for testing.