Deployment Architecture

How to install the AWS Splunk App in a clustered environment?

dwithers
Explorer

Curious about the instructions for deploying the AWS Splunk App in a clustered environment. We have 1 master, 1 search head, 2 indexers, and 2 forwarders. I don't think I missed it, but I did not see a best practice for deploying this way. Thanks.

nkwong_splunk
Splunk Employee

Here are the latest instructions on how to deploy the Splunk App for AWS version 4.0 (https://splunkbase.splunk.com/app/1274/#/overview) in a distributed Splunk environment.

http://docs.splunk.com/Documentation/AWS/4.0.0/Installation/Installon-prem#Install_on_a_distributed_...


khourihan_splun
Splunk Employee

Installing the AWS 2.0 app on a UF (distributed deployment)

Process:

  1. Enable CloudTrail on your account and send notifications to an SNS (Simple Notification Service) topic.
  2. Set up an SQS queue to pull messages from your SNS topic.
  3. Configure the Splunk AWS CloudTrail input.
  4. (Optional) Configure the AWS Billing section.

In AWS:

  1. Enable CloudTrail logging in your AWS account/region

https://portal.aws.amazon.com/gp/aws/developer/registration/index.html?

http://docs.aws.amazon.com/awscloudtrail/latest/userguide/create_trail_using_the_console.html

  2. Enable notifications of your logs to an SNS topic
    http://docs.aws.amazon.com/awscloudtrail/latest/userguide/getting_notifications_configuration.html
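If you prefer scripting this over the console wizard, a minimal AWS CLI sketch of steps 1 and 2 looks roughly like the following. The trail, bucket, and topic names are illustrative placeholders, not values from this post.

    # Assumes the AWS CLI is configured with credentials for the target account/region.
    # The S3 bucket must already exist and allow CloudTrail to write to it, and the
    # SNS topic should exist first (see the create-topic sketch below) if the CLI
    # does not create it for you the way the console wizard does.
    aws cloudtrail create-trail \
        --name splunk-trail \
        --s3-bucket-name my-cloudtrail-bucket \
        --sns-topic-name cloudtrail2splunk

    # Start delivering log files (and SNS notifications) for the new trail.
    aws cloudtrail start-logging --name splunk-trail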

  3. Go to the SNS Console:

You should see your CloudTrail topic, which was created in the previous step.

Capture the ARN; you will need it later when you create the subscription.

Create a Topic:

https://console.aws.amazon.com/sns/home?region=us-east-1#

arn:aws:sns:us-east-1:blah:cloudtrail2splunk
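If you are working from the CLI instead of the console, the topic and its ARN can be created and retrieved like this (the account number below is a placeholder):

    # Create the topic (skip this if the CloudTrail wizard already created one for you).
    aws sns create-topic --name cloudtrail2splunk
    # Returns the TopicArn, e.g. arn:aws:sns:us-east-1:123456789012:cloudtrail2splunk

    # If the topic already exists, look the ARN up instead.
    aws sns list-topics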

  4. Create an SQS queue to be used to watch for SNS notifications of CloudTrail logs (i.e. the cloudtrail2splunk topic above)
  5. Go to the SQS Console and create a new queue:

Enable the ARN field and capture it.

Mine is: arn:aws:sqs:us-east-1:blah:splunk2cloudtrail

From https://console.aws.amazon.com/sqs/home?region=us-east-1#queue-browser:selected=https://sqs.us-east-...

Go to Queue Actions and add a permission as shown below:
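From the CLI, the same queue setup and permission look roughly like this; the account number and ARNs are placeholders:

    # Create the queue and capture its ARN for the subscription step.
    aws sqs create-queue --queue-name splunk2cloudtrail
    aws sqs get-queue-attributes \
        --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/splunk2cloudtrail \
        --attribute-names QueueArn

    # "Queue Actions > Add a Permission" boils down to a queue policy that lets
    # SNS deliver messages from your CloudTrail topic. The statement it creates
    # is roughly:
    #   Effect:    Allow
    #   Principal: * (restricted by the SourceArn condition below)
    #   Action:    SQS:SendMessage
    #   Resource:  arn:aws:sqs:us-east-1:123456789012:splunk2cloudtrail
    #   Condition: aws:SourceArn = arn:aws:sns:us-east-1:123456789012:cloudtrail2splunk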

  6. Subscribe that queue to the SNS topic

To receive messages published to a topic, you have to subscribe an endpoint to that topic. An endpoint is a web server, an email address, or an Amazon SQS queue that can receive notification messages from Amazon SNS. Once you subscribe an endpoint to a topic and the subscription is confirmed, the endpoint will receive all messages published to that topic.

In this section you subscribe an endpoint to the topic you just created. The AWS documentation example sends topic messages to an email address; for this integration, the endpoint is the SQS queue you created above.
To subscribe to a topic
1. In the AWS Management Console, click My Subscriptions in the Navigation pane.
The My Subscriptions page opens.
2. Click the Create New Subscription button. The Subscribe dialog box appears:

Note that a single queue in any region may be subscribed to multiple topics from many regions,
if that configuration is desirable.
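Because the endpoint here is the SQS queue rather than an email address, the CLI equivalent of the subscription step is roughly (placeholder ARNs again):

    # Subscribe the SQS queue to the CloudTrail SNS topic.
    aws sns subscribe \
        --topic-arn arn:aws:sns:us-east-1:123456789012:cloudtrail2splunk \
        --protocol sqs \
        --notification-endpoint arn:aws:sqs:us-east-1:123456789012:splunk2cloudtrail

    # Confirm the queue endpoint shows up on the topic.
    aws sns list-subscriptions-by-topic \
        --topic-arn arn:aws:sns:us-east-1:123456789012:cloudtrail2splunk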

Installing on a Splunk Forwarder:

1. Copy the app to /tmp
2. As the splunk user, untar it:

[splunk@ernie tmp]$ tar zxf splunk-app-for-aws_20.tgz

3. Move the app to $SPLUNK_HOME/etc/apps
[splunk@ernie tmp]$ mv SplunkAppforAWS ~splunk/etc/apps/

4. Under "Settings, Data Inputs", create a new AWS CloudTrail Log input.  Note if installing on a forwarder skip to step xxx. 
5. Enter your AWS Access Key, Secret Access Key, the region of your SQS queue, and the queue name:

Access Key: blah
Secret Access Key: blah

6. For the forwarder config, create an inputs.conf file in $SPLUNK_HOME/etc/apps/SplunkAppforAWS/local:

    $ more inputs.conf 
    [aws-cloudtrail://SplunkCloudTrail]
    exclude_describe_events = 1
    host = ernie_forwarder
    index = aws-cloudtrail
    interval = 1
    key_id = <ACCOUNT_KEY>
    remove_files_when_done = 0
    secret_key = <SECRET-KEY>
    sqs_queue = splunk2cloudtrail
    sqs_queue_region = us-east-1
    #_tzhint=GMT
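To sanity-check the stanza on the forwarder, assuming a default $SPLUNK_HOME layout, something like this works; note that the aws-cloudtrail index named above must also exist on the indexers that receive the data:

    # Verify the input stanza is picked up, then restart the forwarder.
    $SPLUNK_HOME/bin/splunk btool inputs list aws-cloudtrail --debug
    $SPLUNK_HOME/bin/splunk restart

    # On each receiving indexer (if you are not managing indexes.conf another way):
    $SPLUNK_HOME/bin/splunk add index aws-cloudtrail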

7. (Optional) You can add the billing configuration as well. Note that you will have to set up consolidated (central) billing in AWS first; this file also goes in the app's local directory (see the other document):

    $ vi aws.conf
    # All three of these stanzas are required in order to run the AWS App.

    [keys]

    # At least one entry required for this stanza

    # Format :
    # <accountno> = <company/group name without space> <aws-access-key> <aws-secret-key> <monthly spend limit for this account>

    #1122334455 = mycompany-name AAAAAAAAAAAAAAAAAAAA +++++BBBBBBBBBBBBBBBBBB/BBBB   10000
    6162xxxxxxxx = freesoft <ACCESSKEY> <SECRET-KEY> 20
    431xxxxxxxxx = splunk SECRETBLAH <SECRET-KEY> 20

    [regions]

    # At least one entry required for this stanza

    rgn1 = eu-west-1
    rgn2 = sa-east-1
    rgn3 = us-east-1
    rgn4 = ap-northeast-1
    rgn5 = us-west-2
    rgn6 = us-west-1
    rgn7 = ap-southeast-1
    rgn8 = ap-southeast-2

    [misc]

    # Format :

    # corpkey = <company name/corp account name without space> <aws-access-key> <aws-secret-key>

    corpkey = freesoft <ACCESS-KEY> <SECRET-KEY>

    # acno is corp account number for AWS
    acno = 616xxxxxxxxx
    # s3bucket is bucket name where AWS bill csv files will be dropped
    s3bucket = aws_app
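After saving, one way to confirm Splunk sees all three stanzas (assuming the app reads this file as aws.conf) is btool, then a restart so the app picks it up:

    # Show the merged aws.conf as Splunk sees it, then restart.
    $SPLUNK_HOME/bin/splunk btool aws list --debug
    $SPLUNK_HOME/bin/splunk restart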

daryl_graham
Engager

I am in a similar situation. Deploying the app via the master to the indexers does not appear to work due to hard-coded paths inside the app. The searches in the billing section also appear to use a local file (which would not be generated on the search heads if the input scripts are running on the indexers).
