All Apps and Add-ons

Unable to get AWS CloudTrail logs via SQS-based S3 input

ccisneroslsq
New Member

We use the Splunk Add-on for AWS, and multiple accounts send their CloudTrail logs to an S3 bucket in a dedicated account. The objects in that bucket are encrypted with a KMS key. Each account has a Splunk IAM user with the required S3, SQS, and KMS permissions, and the S3 bucket has a bucket policy granting those users full access to the bucket.

We have another SQS-based S3 input for an account that sends its CloudTrail logs to an S3 bucket in the same account; those logs are not encrypted, and that input works fine.

When we look at the _internal logs for the inputs that are not working, we are bombarded with messages like the following:

2020-05-14 14:50:34,692 level=CRITICAL pid=15774 tid=Thread-6 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:_process:268 | start_time=1589314358 datainput="Stage-Cloudtrail", ttl=30 message_id="22ca88b4-3bc9-4931-9154-0ac84f80a062" created=1589467834.66 job_id=442dd56d-e988-4a4a-ac71-6eafbe24bf3d | message="An error occurred while processing the message." 
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 256, in _process
    headers = self._download(record, cache, session)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 290, in _download
    return self._s3_agent.download(record, cache, session)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 418, in download
    return bucket.transfer(s3, key, fileobj, **condition)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/s3.py", line 73, in transfer
    headers = client.head_object(Bucket=bucket, Key=key, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/botocore/client.py", line 272, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/botocore/client.py", line 576, in _make_api_call
    raise error_class(parsed_response, operation_name)
ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
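To narrow down where the 403 comes from, it can help to reproduce the same HeadObject call outside Splunk with boto3 and then reason about the possible causes. Below is a minimal diagnostic sketch; the bucket and key names in the commented usage are placeholders, and the cause list is an assumption based on this cross-account, KMS-encrypted setup, not an official error mapping:

```python
def likely_causes(status_code, sse_kms=True, cross_account=True):
    """Map an S3 HeadObject failure status to likely causes for this setup.

    The mapping below is a best-guess checklist for a cross-account,
    SSE-KMS-encrypted CloudTrail bucket, not an authoritative reference.
    """
    causes = []
    if status_code == 403:
        if sse_kms:
            # Identity-policy kms:Decrypt alone is not enough cross-account;
            # the key policy in the bucket's account must also allow the caller.
            causes.append("KMS key policy does not grant kms:Decrypt to the caller")
        if cross_account:
            causes.append("bucket policy or object ownership blocks the external account")
        causes.append("IAM policy missing s3:GetObject on the object ARN")
    elif status_code == 404:
        causes.append("object key from the SQS message does not exist")
    return causes


# Usage against AWS (requires boto3 and valid credentials; names are placeholders):
# import boto3, botocore.exceptions
# s3 = boto3.client("s3")
# try:
#     s3.head_object(Bucket="my-cloudtrail-bucket", Key="AWSLogs/123456789012/CloudTrail/...")
# except botocore.exceptions.ClientError as err:
#     code = err.response["ResponseMetadata"]["HTTPStatusCode"]
#     print(likely_causes(code))
```

If the standalone head_object call fails with the same 403, the problem is in AWS permissions rather than in the add-on's configuration.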

We do not use role assumption with the Splunk users; they have the policy below applied:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "splunk",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole",
                "sqs:SendMessage",
                "sqs:ReceiveMessage",
                "sqs:ListQueues",
                "sqs:GetQueueUrl",
                "sqs:GetQueueAttributes",
                "sqs:DeleteMessage",
                "sns:Publish",
                "sns:List*",
                "sns:Get*",
                "s3:*",
                "s3:ListBucket",
                "s3:ListAllMyBuckets",
                "s3:GetObject",
                "s3:GetLifecycleConfiguration",
                "s3:GetBucketTagging",
                "s3:GetBucketLogging",
                "s3:GetBucketLocation",
                "s3:GetBucketCORS",
                "s3:GetAccelerateConfiguration",
                "rds:DescribeDBInstances",
                "logs:GetLogEvents",
                "logs:DescribeLogStreams",
                "logs:DescribeLogGroups",
                "lambda:ListFunctions",
                "kms:Decrypt",
                "kinesis:ListStreams",
                "kinesis:Get*",
                "kinesis:DescribeStream",
                "inspector:List*",
                "inspector:Describe*",
                "iam:ListUsers",
                "iam:ListAccessKeys",
                "iam:GetUser",
                "iam:GetAccountPasswordPolicy",
                "iam:GetAccessKeyLastUsed",
                "elasticloadbalancing:DescribeTargetHealth",
                "elasticloadbalancing:DescribeTargetGroups",
                "elasticloadbalancing:DescribeTags",
                "elasticloadbalancing:DescribeLoadBalancers",
                "elasticloadbalancing:DescribeListeners",
                "elasticloadbalancing:DescribeInstanceHealth",
                "ec2:DescribeVpcs",
                "ec2:DescribeVolumes",
                "ec2:DescribeSubnets",
                "ec2:DescribeSnapshots",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeReservedInstances",
                "ec2:DescribeRegions",
                "ec2:DescribeNetworkAcls",
                "ec2:DescribeKeyPairs",
                "ec2:DescribeInstances",
                "ec2:DescribeImages",
                "ec2:DescribeAddresses",
                "config:GetComplianceSummaryByConfigRule",
                "config:GetComplianceDetailsByConfigRule",
                "config:DescribeConfigRules",
                "config:DescribeConfigRuleEvaluationStatus",
                "config:DeliverConfigSnapshot",
                "cloudwatch:List*",
                "cloudwatch:Get*",
                "cloudwatch:Describe*",
                "cloudfront:ListDistributions",
                "autoscaling:Describe*"
            ],
            "Resource": "*"
        }
    ]
}
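One thing worth checking: for KMS-encrypted objects accessed cross-account, having kms:Decrypt in the IAM user's identity policy (as above) is not sufficient on its own. The key policy of the KMS key in the log-archive account must also allow the external principals. A sketch of such a key-policy statement is below; the account ID and user name are placeholders for your actual Splunk users:

```json
{
    "Sid": "AllowSplunkDecrypt",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::111111111111:user/splunk-reader"
    },
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey"
    ],
    "Resource": "*"
}
```

A 403 Forbidden on HeadObject (rather than a KMS-specific error) is consistent with this kind of key-policy denial, since S3 masks the underlying reason.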

At this point we have even tried ingesting the logs via the Generic S3 and CloudTrail input types; none of them work.


Jiglo
Engager

Were you able to get this working? If so, how?
