All Apps and Add-ons

Unable to get AWS CloudTrail logs via SQS-based S3 input

ccisneroslsq
New Member

We use the Splunk Add-on for AWS and have multiple accounts that send their CloudTrail logs to an S3 bucket in a specific account. The logs in the bucket are encrypted with a KMS key. Each account has a Splunk user with the required S3, SQS, and KMS permissions, and the S3 bucket has a bucket policy allowing the users from each account full access to the bucket.
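For illustration, a cross-account bucket policy of the shape described would look something like the following sketch. The account IDs, user ARNs, and bucket name are placeholders, not the actual values from this environment:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSplunkUsers",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111111111111:user/splunk",
                    "arn:aws:iam::222222222222:user/splunk"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-cloudtrail-bucket",
                "arn:aws:s3:::example-cloudtrail-bucket/*"
            ]
        }
    ]
}
```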

We have another SQS-based S3 input in an account that sends its CloudTrail logs to an S3 bucket in the same account; those logs are not encrypted, and that input works fine.

When we look at the _internal logs for the inputs that are not working, we are bombarded with the following messages:

2020-05-14 14:50:34,692 level=CRITICAL pid=15774 tid=Thread-6 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:_process:268 | start_time=1589314358 datainput="Stage-Cloudtrail", ttl=30 message_id="22ca88b4-3bc9-4931-9154-0ac84f80a062" created=1589467834.66 job_id=442dd56d-e988-4a4a-ac71-6eafbe24bf3d | message="An error occurred while processing the message." 
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 256, in _process
    headers = self._download(record, cache, session)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 290, in _download
    return self._s3_agent.download(record, cache, session)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 418, in download
    return bucket.transfer(s3, key, fileobj, **condition)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/s3.py", line 73, in transfer
    headers = client.head_object(Bucket=bucket, Key=key, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/botocore/client.py", line 272, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/botocore/client.py", line 576, in _make_api_call
    raise error_class(parsed_response, operation_name)
ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
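One reason the 403 above is so opaque: S3 HEAD responses carry no error body, so a HeadObject denial caused by a missing kms:Decrypt grant looks identical to an ordinary access denial. A small sketch (a hypothetical helper, not part of the add-on) of classifying the parsed error response that botocore attaches to a ClientError:

```python
# Hypothetical helper (not part of the Splunk Add-on for AWS): classify the
# parsed error response that botocore attaches to a ClientError raised by
# HeadObject. HEAD responses have no body, so S3 reports only the bare
# HTTP status code, never the underlying reason for the denial.

def classify_head_object_error(parsed_response):
    """Return a best-effort guess at the cause of a HeadObject failure."""
    code = parsed_response.get("Error", {}).get("Code", "")
    if code in ("403", "Forbidden", "AccessDenied"):
        # Could be the bucket policy, the object owner's ACL, or the KMS key
        # policy denying kms:Decrypt -- HeadObject cannot distinguish these.
        return "access denied (check bucket policy, object ownership, KMS key policy)"
    if code in ("404", "NotFound", "NoSuchKey"):
        return "object not found"
    return "other: " + code

# This response shape matches the error in the traceback above:
print(classify_head_object_error({"Error": {"Code": "403", "Message": "Forbidden"}}))
```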

We do not use assume-role with the Splunk users; they have the policy below applied:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "splunk",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole",
                "sqs:SendMessage",
                "sqs:ReceiveMessage",
                "sqs:ListQueues",
                "sqs:GetQueueUrl",
                "sqs:GetQueueAttributes",
                "sqs:DeleteMessage",
                "sns:Publish",
                "sns:List*",
                "sns:Get*",
                "s3:*",
                "s3:ListBucket",
                "s3:ListAllMyBuckets",
                "s3:GetObject",
                "s3:GetLifecycleConfiguration",
                "s3:GetBucketTagging",
                "s3:GetBucketLogging",
                "s3:GetBucketLocation",
                "s3:GetBucketCORS",
                "s3:GetAccelerateConfiguration",
                "rds:DescribeDBInstances",
                "logs:GetLogEvents",
                "logs:DescribeLogStreams",
                "logs:DescribeLogGroups",
                "lambda:ListFunctions",
                "kms:Decrypt",
                "kinesis:ListStreams",
                "kinesis:Get*",
                "kinesis:DescribeStream",
                "inspector:List*",
                "inspector:Describe*",
                "iam:ListUsers",
                "iam:ListAccessKeys",
                "iam:GetUser",
                "iam:GetAccountPasswordPolicy",
                "iam:GetAccessKeyLastUsed",
                "elasticloadbalancing:DescribeTargetHealth",
                "elasticloadbalancing:DescribeTargetGroups",
                "elasticloadbalancing:DescribeTags",
                "elasticloadbalancing:DescribeLoadBalancers",
                "elasticloadbalancing:DescribeListeners",
                "elasticloadbalancing:DescribeInstanceHealth",
                "ec2:DescribeVpcs",
                "ec2:DescribeVolumes",
                "ec2:DescribeSubnets",
                "ec2:DescribeSnapshots",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeReservedInstances",
                "ec2:DescribeRegions",
                "ec2:DescribeNetworkAcls",
                "ec2:DescribeKeyPairs",
                "ec2:DescribeInstances",
                "ec2:DescribeImages",
                "ec2:DescribeAddresses",
                "config:GetComplianceSummaryByConfigRule",
                "config:GetComplianceDetailsByConfigRule",
                "config:DescribeConfigRules",
                "config:DescribeConfigRuleEvaluationStatus",
                "config:DeliverConfigSnapshot",
                "cloudwatch:List*",
                "cloudwatch:Get*",
                "cloudwatch:Describe*",
                "cloudfront:ListDistributions",
                "autoscaling:Describe*"
            ],
            "Resource": "*"
        }
    ]
}
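One thing worth noting for cross-account access to KMS-encrypted objects: the IAM kms:Decrypt permission above is not sufficient on its own. The key policy on the KMS key in the bucket's account must also grant the external principals decrypt access, or HeadObject/GetObject will fail with a 403 like the one above. A hedged sketch of such a key-policy statement (placeholder account ID and user name, not actual values; in a key policy, "Resource": "*" refers to the key itself):

```json
{
    "Sid": "AllowCrossAccountDecrypt",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::111111111111:user/splunk"
    },
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey"
    ],
    "Resource": "*"
}
```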

At this point we have even tried pulling the logs in via the Generic S3 and CloudTrail input types; none of them work.


Jiglo
Engager

Were you able to get this working? If so, how?
