Automating the installation of the Splunk Add-on for Amazon Web Services, how do we encrypt the password in passwords.conf?

michaeloleary
Path Finder

Hey Folks,

I've come across an unusual problem while trying to automate the installation of the Splunk Add-on for Amazon Web Services. We are currently using ansible-pull to execute some scripts which in turn create a customized copy of the /opt/splunk/etc/apps/Splunk_TA_aws/local/passwords.conf file, retrieving the credentials via a credstash lookup. This works, but restarting the Splunk binary does not encrypt the password values in the Splunk_TA_aws passwords.conf. So we ended up with something like the following:
[credential:testCreds:AKIA.....23A:]
password = zZ+U..................7HaOS

instead of something like this:

[credential:testCreds:AKIA.....23A:]
password = $1$B8Ip...........TmHnGo=

Note the $1$ indicating the encrypted value. Security compliance within the organization requires that the secret key be encrypted at rest. However, the only way I've found to get the password in passwords.conf encrypted is via the UI: within the Splunk_TA_aws, clicking "Configuration" > "Actions" > "Edit", filling in the secret key, then clicking "Update". While I could automate this via Selenium WebDriver, that adds an additional layer of complexity for an organization that is doing a proof of concept with Splunk and doesn't use Selenium. Is there a command-line tool supplied with the AWS TA that we can invoke from a shell to inject the encrypted value into passwords.conf?
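For reference, the check our compliance scan boils down to can be expressed as a small shell sketch (the $1$ prefix is what Splunk writes for encrypted values, $7$ on newer releases; the function name is mine):

```shell
# Sketch: classify a passwords.conf value as encrypted or clear text.
# Splunk prefixes encrypted values with $1$ (or $7$ on newer versions).
is_encrypted() {
  case "$1" in
    '$1$'*|'$7$'*) return 0 ;;   # already encrypted at rest
    *) return 1 ;;               # still clear text
  esac
}

is_encrypted '$1$B8IpTmHnGo=' && echo encrypted
is_encrypted 'zZ+U7HaOS'      || echo "clear text"
```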

Regards
Michael

1 Solution

nvonkorff
Path Finder

I've been bashing my head against this for a few days now and I think I have found the answer. Thanks to Jeremiah's previous response for pointing me toward the right REST endpoint.

AWS Credentials:

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/storage/passwords -d name=Cr4zy4cc355k3y -d password=Cr4zyS3cr3tK3y -d realm=SplunkAWS -d title=SplunkAWS:Cr4zy4cc355k3y:

Proxy config (if required):

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/storage/passwords -d name=default -d password=:@proxy.server.address.com:3128 -d realm=_aws_proxy -d title=_aws_proxy:default:
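A sketch of how you can verify the result afterwards (file path as in the original post; the grep pattern assumes Splunk's $-prefixed encrypted format, and a temp file stands in for the real passwords.conf here):

```shell
# Sketch: count clear-text values left in passwords.conf after the POST.
# A sample file stands in for /opt/splunk/etc/apps/Splunk_TA_aws/local/passwords.conf
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[credential:SplunkAWS:Cr4zy4cc355k3y:]
password = $1$B8IpTmHnGo=
EOF
# Any hit here is a value Splunk did NOT encrypt:
grep -c '^password = [^$]' "$CONF"
rm -f "$CONF"
```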


Jeremiah
Motivator

Here's an update since this post is over a year old: I'd recommend that anyone looking to automate configuration of the AWS app check out the scripts in Splunk_TA_aws/bin/tools/configure. Splunk now provides a Python script to add accounts, roles, and inputs to the AWS app without needing to use the UI (it makes the REST calls for you, basically). It supports storing credentials as well as using instance profiles and roles. I was able to create an account entry using an instance profile, add an assumed role, and create an input, all via the CLI.
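A hedged sketch of driving those scripts from a shell (paths per a default install; I'm deliberately not guessing script names or flags, since they vary by TA version, so inspect the directory and --help first):

```shell
# Sketch: locate and inspect the TA's configuration scripts before scripting
# against them; exact script names and arguments differ across TA versions.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
TOOLS="$SPLUNK_HOME/etc/apps/Splunk_TA_aws/bin/tools"
ls "$TOOLS" 2>/dev/null || echo "Splunk_TA_aws tools not found at $TOOLS"
# Run a script with Splunk's bundled Python so the TA's imports resolve, e.g.:
# "$SPLUNK_HOME/bin/splunk" cmd python "$TOOLS/configure/<script>.py" --help
```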

bhavesh91
New Member

Jeremiah,

We will check on this and reach out if we have issues or questions.


jplumsdaine22
Influencer

amazing find!


michaeloleary
Path Finder

Thanks Jeremiah, an API call was also on my list of things to automate. Sorry I didn't respond sooner; I didn't realise you had posted a reply.



rasikmhetre
Explorer

For AWS credentials, if you have a "+" in your secret key, this API call won't work as written.

The workaround I used was to generate a new secret key that doesn't contain the '+' character.
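An alternative to rotating the key: the '+' breaks because form-encoded POST bodies decode a raw '+' as a space, so percent-encoding the value should also work. A sketch (curl's --data-urlencode does the same encoding for you; the key values are the made-up ones from the accepted answer):

```shell
# Sketch: percent-encode '+' in the secret before POSTing it, instead of
# generating a new key. In form-encoded bodies a raw '+' decodes to a space.
secret='Cr4zy+S3cr3tK3y'                       # example secret containing '+'
encoded=$(printf '%s' "$secret" | sed 's/+/%2B/g')
echo "$encoded"
# Or let curl handle the encoding:
# curl -k -u admin:changeme \
#   https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/storage/passwords \
#   -d name=Cr4zy4cc355k3y --data-urlencode "password=$secret" \
#   -d realm=SplunkAWS -d title=SplunkAWS:Cr4zy4cc355k3y:
```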


michaeloleary
Path Finder

Cheers Jeremiah, looks like I missed this in the docs the first time around.


bhavesh91
New Member

Hi Nvonkorff/Jeremiah,

We are trying to automate enabling the proxy config for the Splunk AWS add-on. With the curl command you provided we do get the passwords.conf entry, but in the UI the proxy doesn't show as Enabled; it stays disabled until we manually go in and check the Enable box. I also compared the passwords.conf written after running the curl command against the one written after manually checking Enable Proxy, and the two encrypted values are different. Please let us know whether it's even possible to automate the proxy enablement.


sloshburch
Splunk Employee

Couldn't you do the proxy part with the Deployment Server as part of the push of the app itself? Then use the REST api for the credential part?


bhavesh91
New Member

Hi Burch,

We have an EC2 instance running Splunk Enterprise (acting as a heavy forwarder) with autoscaling enabled, so we are looking to automate the configuration for fault tolerance. That is why we aren't using the deployment server: it would defeat the purpose of automatically enabling the proxy along with the inputs.
So we are trying to figure out if there is some way to get this configured; any help here would be great.

Also, we are using the IAM instance profile associated with the EC2 instance, so we don't have a secret key and access key granted, since we aren't using an IAM user.

Thanks,
Bhavesh


sloshburch
Splunk Employee

This might be out of my domain of expertise, but I didn't follow this part "Deployment server as it defeats purpose of automatically enabling the proxy with the inputs enabled in it."

I would imagine that when the new EC2 instance is instantiated it would simply get its config (including proxy) from the DS.


bhavesh91
New Member

Sorry, I should have explained it correctly. I have to check on the deployment server option. The only question I have about the deployment server is how it will detect that a new EC2 instance has come up. Does it have a phone-home option, such that the deployment server automatically detects the instance and pushes the appropriate configs to it? Would you mind sharing links to the docs on the deployment server, and should the deployment server be on AWS as well?


sloshburch
Splunk Employee

Oh boy! Sounds like you're missing out on a lot of great functionality!

Yea, the way you described the Deployment Server is its core functionality, just with the communication flipped. Clients are told about the Deployment Server; they then communicate with the DS and download the configurations assigned to them per the DS server classes.

I would highly recommend getting comfortable with the DS option because (1) it sounds like you are putting in a lot of work to build a solution that the DS already does for you; (2) once you are familiar with it, I'm confident you'll see that the DS will benefit many other aspects of your environment besides this one HF discussion.

Deployment Server documentation is at http://docs.splunk.com/Documentation/Splunk/latest/Updating
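For reference, the DS side of what I described is a server class in serverclass.conf; a minimal sketch (the class name and whitelist pattern are made-up examples, so adjust them to your host naming):

```ini
# etc/system/local/serverclass.conf on the deployment server (sketch;
# serverClass name and whitelist are hypothetical examples)
[serverClass:aws_hf]
whitelist.0 = hf-*

[serverClass:aws_hf:app:Splunk_TA_aws]
restartSplunkd = true
```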


bhavesh91
New Member

So the deployment server should also be on AWS itself, right, to have the communication? Yeah, we were using the DS a long time back; we then took a slightly different approach, but it looks like we will need to go back to our roots : ) . I will start looking into the DS and reach out to you if I get stuck.

Thanks Burch


sloshburch
Splunk Employee

Where the DS lives is really a factor of what communication your environment allows. Worst case, you might have more than one Deployment Server if you don't allow communication. Best case is you have the one. I can't really get into it too much more without knowing all the details of the environment but I'm confident your account team could work through that with you.


bhavesh91
New Member

Hi Burch,

I gave it a try with a DS that I set up on AWS: I enabled the proxy for the add-on for AWS with the host and port information in the UI, then took that config and pushed it via the deployment server to the standalone heavy forwarder. The Kinesis inputs showed up, but the proxy-enable option did not work. I then read another post that mentions credential management not being handled by the DS. It talks about multiple forwarders, but the DS doesn't seem to handle it even for an individual/standalone instance: https://answers.splunk.com/answers/319320/how-to-add-an-aws-account-to-the-splunk-add-on-for.html

"Don't use a deployment server to deploy a configured add-on across multiple forwarders. The deployment server is not compatible with credential management or with deploying configured modular inputs to multiple nodes (which results in duplicate data collection.)"


sloshburch
Splunk Employee

Hi @bhavesh91 - Remember that answers threads are not blog posts. Sometimes answers are from other customers whereas Blog posts are written by Splunk employees.

I would assume the problem with sharing credentials is related to the use of different splunk.secret files, which are used in calculating the password hash on each instance. In other words, if you have the same splunk.secret file on each HF, they will hash the same way.
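If that's the cause, a sketch of the fix at provisioning time (path is the default install location; "golden-host" is a placeholder for wherever you keep the shared secret). The shared splunk.secret has to be in place before the instance's first start, since Splunk generates one on startup if it's missing:

```shell
# Sketch: give every auto-scaled HF the same splunk.secret so encrypted
# .conf values hash identically. Must run BEFORE Splunk first starts.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
SECRET="$SPLUNK_HOME/etc/auth/splunk.secret"
# e.g. pull the shared secret from your provisioning store:
# scp golden-host:"$SECRET" "$SECRET" && chmod 400 "$SECRET"
[ -f "$SECRET" ] && echo "splunk.secret present" || echo "splunk.secret missing"
```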

Alternatively, it is certainly possible that the AWS app handles proxy details in a different fashion and therefore this is an exception to that solution and may require manual intervention for now. If that is the case, I would suggest creating an Enhancement Request (P4 support ticket) if you don't already have one created.


sloshburch
Splunk Employee

Interesting. In this design, do you potentially have multiple heavy forwarders running, or is it restricted to just the one? I ask because if multiple instances are instantiated, I'm curious how you ensure the modular inputs keep checkpoints instead of re-indexing data.


bhavesh91
New Member

Hi Burch,

Nope, we have only one heavy forwarder running, and it's auto-scaled with both minimum and maximum set to 1 for fault tolerance. We don't run multiple HFs precisely to avoid re-indexing/duplication of data.
