Deployment Architecture

How to create and share users/roles between search heads on a Splunk 6.2 Search Head Clustering deployment?

fabiocaldas
Contributor

How can I create users/roles to be shared between search heads on a Splunk 6.2 Search Head Clustering deployment?

1 Solution

jbeyers_splunk
Splunk Employee

Hi,

Search head clustering does not support splunk-auth user sharing in Splunk 6.2. You could create users and roles on each search head cluster member separately, but it's recommended to use centralized authentication (e.g. LDAP) on the members to manage this more easily, especially for larger search head clusters.
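
For reference, a minimal authentication.conf sketch of what centralized LDAP authentication on a member can look like (the host, DNs, and group names below are placeholders, not values from this thread):

[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 389
bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
bindDNpassword = changeme
userBaseDN = ou=people,dc=example,dc=com
userNameAttribute = uid
realNameAttribute = cn
groupBaseDN = ou=groups,dc=example,dc=com
groupNameAttribute = cn
groupMemberAttribute = member

[roleMap_corp_ldap]
admin = splunk-admins
user = splunk-users

Each Splunk role on the left is mapped to an LDAP group on the right, so membership changes in the directory take effect on every member without touching local accounts.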

-Jason

jnicholsenernoc
Path Finder

We aren't using LDAP and don't have any plans to.

Instead of rsync between servers, we use S3, as we are running on AWS. It's not the prettiest solution, but it gets the job done by checking whether the file in S3 is newer or older than the local file and handling it appropriately. With this, users can be added on any search head and password updates can occur anywhere. The rest of the user data appears to be shared correctly through search head clustering.

The script below queries S3 for the passwd file at s3://splunk-configs/splunk-users/etc/passwd and compares its last-modified timestamp against the timestamp on the local file /opt/splunk/etc/passwd.

There are three outcomes: the local file is newer than, older than, or exactly the same age as the file in S3.
1) If the timestamps "match", the script does nothing and quietly exits.

2) If the local file is older than the S3 file, the script downloads it, overwriting the local copy, and then triggers a reload of Splunk's authentication providers. If run interactively, the admin username and password can be supplied; otherwise the reload will fail unless credentials are hardcoded in the script. It then touches the local file with the exact timestamp of the S3 file so the timestamps "match" and the script does nothing on future runs.

3) If the local file is newer than the S3 file, the script pushes the local file to S3, overwriting what is there. It then queries S3 repeatedly, sleeping 10 seconds between attempts, until it sees the timestamp update, as it can take time for S3 to become consistent. It then uses this new timestamp to touch the local passwd file so the timestamps "match" and the script does nothing on future runs.

The script is installed on each search head for local (non-clustered) execution with these commands. We didn't use the deployer because we want it to run on every node locally at all times. The contents of the script are below.

ln -s /opt/splunk/releases/user-sync.pl /opt/splunk/etc/apps/search/bin/user-sync.pl
/opt/splunk/bin/splunk add exec -source /opt/splunk/etc/apps/search/bin/user-sync.pl -interval 600
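
The add exec command registers the script as a scripted input, which ends up as a stanza along these lines in inputs.conf (under the search app's local directory, if memory serves):

[script:///opt/splunk/etc/apps/search/bin/user-sync.pl]
interval = 600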

This causes the sync to occur every 10 minutes, which is enough for our use case. Your usage might differ.

The only known issue is that if two users update their passwords during the same sync run, one change can overwrite the other. We have an alert for this and review each password update that gets made.

Users can be created on any search head. The user directories appear to be created and replicated as part of SHC.

Using the scripted input approach, the results can be viewed with the following search: index=main source="/opt/splunk/etc/apps/search/bin/user-sync.pl"

HTH. If anyone has a better script, we'd be happy to use it, so please post it. No plans for LDAP.

#!/usr/bin/perl -w
use strict;
use Time::Local;
#This script pulls down a copy of the passwd file for the domain from s3
# It will compare it with the local copy
# If the current local copy is newer, it pushes it
# If the current local copy is older, it pulls down the newest version (and should bounce splunk)

# This variable indicates where to pull down the configs from
my $s3_bucket = "splunk-configs";

# Path inside the bucket to the passwd file
my $path = "/splunk-users/etc";

my $path_to_local_file = "/opt/splunk/etc/passwd";

use POSIX qw(strftime);
my $now_string = strftime "%m%d%H%M%S", localtime;

my $dryrun = "";
if(defined($ARGV[0])) {
        if($ARGV[0] eq "--dryrun" ) {
                $dryrun = "--dryrun";
        }
}

my $s3_options = $dryrun . " ";

my $s3_cmd = "aws s3 ls s3://$s3_bucket$path/passwd";
my $timestamp = `$s3_cmd | awk '{print \$1 " " \$2}'`;
my ($year,$mon,$mday,$hour,$min,$sec) = split(/[-\s.:]+/, $timestamp);
my $s3_mod_time = timelocal($sec,$min,$hour,$mday,$mon-1,$year);

# Stat the local file directly; die if it doesn't exist
die "Cannot stat $path_to_local_file: $!" unless -e $path_to_local_file;
my $local_mod_time = (stat($path_to_local_file))[9];

if($s3_mod_time == $local_mod_time) {
        print "Files match. Doing nothing.\n";
} elsif($s3_mod_time > $local_mod_time) {
        print "Local file is older thatn the s3 file.  Download from s3 and overwriting local passwd\n";
        system("aws s3 cp s3://$s3_bucket$path/passwd /opt/splunk/etc/passwd");
        print "Triggering splunk restart\n";
        system("/opt/splunk/bin/splunk _internal call /authentication/providers/services/_reload -auth adminuser:yoursplunkpw");
        system("touch --date \"$year-$mon-$mday $hour:$min:$sec\" $path_to_local_file");
} else {
        print "Local file is newer than the s3 file.  Pushing from local to s3.\n";
        system("aws s3 cp $path_to_local_file s3://$s3_bucket$path/passwd");

        print "Touching the local file with the s3 timestamp...\n";

        $new_s3_mod_time = $s3_mod_time;

        # s3 listings can lag behind writes; poll until the new timestamp shows up
        while($new_s3_mod_time == $s3_mod_time) {
                sleep(10);
                $timestamp = `$s3_cmd | awk '{print \$1 " " \$2}'`;
                ($year,$mon,$mday,$hour,$min,$sec) = split(/[-\s.:]+/, $timestamp);
                $new_s3_mod_time = timelocal($sec,$min,$hour,$mday,$mon-1,$year);
        }

        ($year,$mon,$mday,$hour,$min,$sec) = split(/[-\s.:]+/, $timestamp);
        system("touch --date \"$year-$mon-$mday $hour:$min:$sec\" $path_to_local_file");
}

steven_swor
Path Finder

Also, I assume your search head cluster uses a shared secret (e.g. $SPLUNK_HOME/etc/auth/splunk.secret)? If not, how did you manage to solve the problem of password hashing?

steven_swor
Path Finder

Wow, it sounds like you guys went to a lot of trouble to avoid using LDAP. It's a bit off-topic, but I'm curious to know why (not that I'm trying to argue in favor of LDAP, just curious).

changux
Builder

One "dirty option": you can always use tools like rsync to sync files between servers 🙂

jnhth
Explorer

But I still have to create roles on each search head in the cluster, right? And then I can use LDAP for authentication?
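
I'm assuming that means an authorize.conf stanza along these lines on every member, with the LDAP groups mapped onto the role via roleMap (the role name below is just an example):

[role_analyst]
importRoles = user
srchIndexesAllowed = main
srchIndexesDefault = main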

fabiocaldas
Contributor

Hi jbeyers_splunk, sorry it took so long; I was really just waiting for a miracle because I couldn't believe I would have to use LDAP. But OK, I'm fine again. Thanks for your answer.
