Knowledge Management

Best Practices For Syncing Knowledge Objects on Standalone Search Heads

TheColorBlack
Path Finder

Evening Splunk community,

My organization runs blue/green data centers, and we're required to switch the production data center every quarter.

In my environment I manage two standalone Search Heads, one in each data center, separated by region. I'm trying to determine a clean solution for keeping user knowledge artifacts (saved searches, reports, alerts, etc.) synced across the two Search Heads without having to implement Search Head Cluster replication.

Does anyone have any tips, advice, or general best practices when it comes to keeping knowledge objects synced between two or more standalone Search Heads? I've read a few forum posts that cover this topic and I've detailed some of the solutions I'm brainstorming, but wanted to get everyone's opinion before I start down the wrong path.

For starters, I believe the knowledge artifacts on a Search Head reside under the following locations, not including the saved searches within /etc/apps:

$SPLUNK_HOME/etc/system/local/authentication.conf
$SPLUNK_HOME/etc/system/local/authorize.conf
$SPLUNK_HOME/etc/users/*

Everything in the local folders and local.meta files under $SPLUNK_HOME/etc/apps

  1. Take a complete backup of /opt/splunk and restore it to the standby Search Head as needed.

  2. Implement an rsync script

    On each Search Head I'd create an environment variable that indicates whether that Search Head is currently the active one for production workloads. If it is, push that SH's configurations / changes to the standby Search Head (a rough sketch of this appears after the list below).

  3. Implement Search Head Clustering.

    While I believe this is the intended solution for what I need to achieve, I'd like to avoid it if possible: I'm our only Splunk administrator, and from what I've been told there's a fair bit more management overhead to tackle.

  4. Implement a CI / CD pipeline that checks user knowledge artifacts into git as they change and then pushes the data out to the environment as needed

    I think this would be the ultimate goal for me, as we're working to eliminate as much toil and as many landmines in our environment as possible (the check-in half is sketched after this list as well). Are there any good blog posts or guides on managing and deploying Splunk configurations via CI / CD?

  5. Create an EFS mount on both Search Heads and point the directories that could contain Splunk user data at the shared EFS mount.
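Regarding option 2, here is a minimal sketch of what such a sync script could look like, assuming passwordless SSH/rsync access from the active Search Head to the standby one is already in place. The SPLUNK_SH_ACTIVE variable, the standby hostname, and the exact path list are hypothetical, for illustration only.

#!/usr/bin/env python3
"""Sketch: push knowledge-object directories to the standby Search Head,
but only when this node is the active one. Paths, the SPLUNK_SH_ACTIVE
variable, and the standby hostname are illustrative assumptions."""

import os
import subprocess

SPLUNK_HOME = os.environ.get("SPLUNK_HOME", "/opt/splunk")
STANDBY_HOST = "standby-sh.example.com"  # hypothetical standby Search Head

# Locations believed to hold user knowledge artifacts (from the list above).
# In practice you may want to narrow etc/apps down to local/ and local.meta only.
SYNC_PATHS = [
    f"{SPLUNK_HOME}/etc/users/",
    f"{SPLUNK_HOME}/etc/system/local/",
    f"{SPLUNK_HOME}/etc/apps/",
]

def main() -> None:
    # Only the currently active Search Head pushes its changes.
    if os.environ.get("SPLUNK_SH_ACTIVE") != "true":
        print("Not the active Search Head; skipping sync.")
        return

    for path in SYNC_PATHS:
        # --delete keeps the standby an exact mirror; drop it if you never
        # want objects removed on the standby side.
        subprocess.run(
            ["rsync", "-az", "--delete", path, f"{STANDBY_HOST}:{path}"],
            check=True,
        )

if __name__ == "__main__":
    main()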
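Regarding option 4, here is a minimal sketch of the check-in half only, assuming a hypothetical local git clone at /opt/splunk-ko-backup; the deploy half would live in whatever CI/CD tool picks up the pushed commits.

#!/usr/bin/env python3
"""Sketch: snapshot knowledge-object directories into a git repository
whenever they change. The repository location is a hypothetical example."""

import os
import subprocess
from datetime import datetime, timezone

SPLUNK_HOME = os.environ.get("SPLUNK_HOME", "/opt/splunk")
REPO_DIR = "/opt/splunk-ko-backup"  # hypothetical local git clone

# Same knowledge-object locations as in option 2's sketch.
TRACKED_PATHS = ["etc/users", "etc/system/local", "etc/apps"]

def run(args, cwd=None):
    subprocess.run(args, cwd=cwd, check=True)

def main() -> None:
    # Copy the current knowledge objects into the repository's working tree.
    for rel in TRACKED_PATHS:
        dest = os.path.join(REPO_DIR, rel)
        os.makedirs(dest, exist_ok=True)
        run(["rsync", "-az", "--delete", f"{SPLUNK_HOME}/{rel}/", f"{dest}/"])

    # Commit only if something actually changed.
    status = subprocess.run(["git", "status", "--porcelain"],
                            cwd=REPO_DIR, capture_output=True, text=True,
                            check=True)
    if not status.stdout.strip():
        print("No knowledge-object changes to commit.")
        return

    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    run(["git", "add", "-A"], cwd=REPO_DIR)
    run(["git", "commit", "-m", f"Knowledge object snapshot {stamp}"], cwd=REPO_DIR)
    run(["git", "push"], cwd=REPO_DIR)  # the CI/CD pipeline takes it from here

if __name__ == "__main__":
    main()

Either sketch would presumably be driven by cron or a systemd timer on whichever Search Head is currently active.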


burwell
SplunkTrust

I think search head clustering is the solution that you want. It allows you to share knowledge objects among the search heads, and if you have four or more members you can lose a head without undue pain.

After you have it set up, day-to-day administration should take less time than monitoring each individual head. You could use the Monitoring Console to monitor the SHC and to set up all kinds of alerts.


isoutamo
SplunkTrust
Hi
As @burwell said, the SHC is the best option; the second best is rsync. In both cases you should also implement some kind of backup solution (e.g. with git) to keep users' changes in a safe place even if there are incidents on the production nodes.
r. Ismo