All Apps and Add-ons

Splunk App for AWS: Why am I getting "Unexpected error "" from python handler: "S3ResponseError: 403 Forbidden" when trying to connect to S3 buckets?

Explorer

I am trying to connect to S3 buckets for logging and billing purposes.

Unexpected error "" from python handler: "S3ResponseError: 403 Forbidden"

is appearing after a successful Get_buckets call, when the app attempts to list keys.

I am even using the same account that created the bucket. I can list keys of a few other buckets (which are of no interest). I have tried running ntpdate on the machine, granted full rights to the account, added an inline policy, and added bucket policies for both the bucket and /*. I have even written a simple script using boto to connect and list the bucket contents. All of these methods fail.

The AWS S3 CLI works fine, however.

1 Solution

Explorer

I have solved the issue. If you have an isolated VPC, or your EC2 instances are in a subnet with no Internet Gateway, you must attach a VPC endpoint policy. This link helped:

http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies-vpc-endpoint.html

How To:

  1. Log in to the AWS Console.
  2. Click VPC.
  3. Select Endpoints (if none exists, create one).
  4. Add a policy like the one below so the EC2 instances within the VPC are permitted access.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Allow-Access-To-Splunk-Buckets",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::mys3bucketthatsplunkneedstoread/*",
                    "arn:aws:s3:::mys3bucketthatsplunkneedstoread"
                ]
            }
        ]
    }
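Before attaching the endpoint policy, it can be worth sanity-checking that it covers both the bucket ARN and the object ARNs, since omitting either is a common cause of a 403 (listing keys needs the bare bucket ARN, fetching objects needs the `/*` form). A minimal stdlib-only sketch, using the placeholder bucket name from above:

```python
import json

# The endpoint policy from the steps above (bucket name is the placeholder).
policy = json.loads("""
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow-Access-To-Splunk-Buckets",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mys3bucketthatsplunkneedstoread/*",
                "arn:aws:s3:::mys3bucketthatsplunkneedstoread"
            ]
        }
    ]
}
""")

bucket_arn = "arn:aws:s3:::mys3bucketthatsplunkneedstoread"
resources = policy["Statement"][0]["Resource"]

# Listing keys needs the bare bucket ARN; fetching objects needs "<bucket>/*".
assert bucket_arn in resources, "missing bucket ARN (listing keys will 403)"
assert bucket_arn + "/*" in resources, "missing object ARNs (object reads will 403)"
print("endpoint policy covers bucket and objects")
```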
    


Splunk Employee

It seems you don't have permission to access this bucket, or the credentials are invalid.


Explorer

Ironically, the account that created the bucket is the same one accessing it. Any advice on verifying permissions? Again, the CLI works fine for this account, while boto's behavior differs.


Splunk Employee

Could you post your IAM settings? We will try to reproduce the issue. Meanwhile, please check whether your account has the capabilities listed in: http://docs.splunk.com/Documentation/AWS/4.1.1/Installation/ConfigureyourAWSpermissions


Explorer

I think I need to see how boto creates the GET request and verify that it meets AWS standards. I will research this...

The CORS configuration is as follows:

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Here is the IAM policy. I granted s3:Get* and s3:List*.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sqs:GetQueueAttributes",
                "sqs:ListQueues",
                "sqs:ReceiveMessage",
                "sqs:GetQueueUrl",
                "sqs:SendMessage",
                "sqs:DeleteMessage",
                "config:DeliverConfigSnapshot",
                "iam:GetUser",
                "autoscaling:Describe*",
                "cloudwatch:Describe*",
                "cloudwatch:Get*",
                "cloudwatch:List*",
                "sns:Get*",
                "sns:List*",
                "s3:Get*",
                "s3:List*",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:GetLogEvents",
                "ec2:DescribeInstances",
                "ec2:DescribeReservedInstances",
                "ec2:DescribeSnapshots",
                "ec2:DescribeRegions",
                "ec2:DescribeKeyPairs",
                "ec2:DescribeNetworkAcls",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVolumes",
                "ec2:DescribeVpcs",
                "rds:DescribeDBInstances",
                "cloudfront:ListDistributions",
                "elasticloadbalancing:DescribeLoadBalancers"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
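Since the policy grants S3 access via wildcards like s3:Get*, one way to double-check coverage is to expand the wildcards against the actions the inputs need. A stdlib-only sketch; the `required` list here is an assumption, so check the Splunk permissions doc linked earlier for the authoritative set:

```python
from fnmatch import fnmatchcase

# Wildcard actions granted for S3 in the policy above.
granted = ["s3:Get*", "s3:List*"]

# Assumed minimum actions for reading a bucket; see the Splunk
# permissions documentation for the authoritative list.
required = ["s3:GetObject", "s3:ListBucket",
            "s3:GetBucketLocation", "s3:ListAllMyBuckets"]

# Expand the IAM-style wildcards and flag anything not covered.
missing = [action for action in required
           if not any(fnmatchcase(action, pattern) for pattern in granted)]

print("missing actions:", missing)
```

With the wildcards above, `missing` comes back empty, which suggests the 403 is coming from somewhere other than the IAM action list (e.g., a bucket or endpoint policy).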

Splunk Employee

We copied your policies and tested in our environment, but did not encounter such an exception. Have you changed the IAM user's security credentials after creating the bucket, or assigned any permissions to the bucket in S3's Properties panel in the AWS Management Console?

If not, please try this solution in your script:
mybucket = conn.get_bucket(BUCKET_NAME, validate=False)

By the way, could you please list the versions of Splunk and the app/add-on? Thanks a lot.


Explorer

I tried validate=False, which gets further, but the 403 still occurs. I might have pasted an older version of the script in the original entry.

I have a ticket open with AWS also. I am working from both angles.

Splunk 6.0.0
AWS App For Splunk 4.1.0


Splunk Employee

Hi, where did you see this error? In the add-on, the app, or both? On which page? Could you upload the error log as well? Thanks.

Peter Chen


Explorer

I receive the error when configuring either the S3 input or the Billing input in the Splunk app. I can force the configuration in the add-on, but I still receive the 403 error.

I am pasting the log below (I changed the S3 bucket name):

2016-02-26 19:27:15,600 INFO pid=14160 tid=Thread-9 file=aws_s3_data_loader.py:_do_index_data:77 | Start processing datainput=foobar:AWSS3:testinput, bucket=somebucketname
2016-02-26 19:27:18,226 ERROR pid=14160 tid=Thread-9 file=aws_s3_data_loader.py:index_data:68 | Failed to collect S3 data from bucket_name=somebucketname, error=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_data_loader.py", line 64, in index_data
    self._do_index_data()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_data_loader.py", line 79, in _do_index_data
    self.collect_data()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_data_loader.py", line 85, in collect_data
    conn = s3common.create_s3_connection(self._config)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_common.py", line 168, in create_s3_connection
    config[tac.region] = get_region_for_bucketname(config)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_common.py", line 137, in get_region_for_bucketname
    return try_special_regions(config)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_common.py", line 89, in try_special_regions
    raise last_exception
S3ResponseError: S3ResponseError: 403 Forbidden
2016-02-26 19:40:15,427 INFO pid=14160 tid=Thread-3 file=ta_aws_common.py:reload_and_exit:97 | Conf file(s)=['/opt/splunk/etc/apps/Splunk_TA_aws/local/passwords.conf'] changed, exiting...
2016-02-26 19:40:15,428 INFO pid=14160 tid=Thread-3 file=data_loader_mgr.py:tear_down:95 | DataLoaderManager is going to stop.
2016-02-26 19:40:15,430 INFO pid=14160 tid=MainThread file=data_loader_mgr.py:_wait_for_tear_down:89 | DataLoaderManager got stop signal
2016-02-26 19:40:18,491 INFO pid=14160 tid=MainThread file=data_loader_mgr.py:run:78 | DataLoaderManager stopped.
2016-02-26 19:40:18,491 INFO pid=14160 tid=MainThread file=aws_s3.py:run:132 | End aws_s3
2016-02-26 19:40:18,658 INFO pid=3126 tid=MainThread file=aws_s3.py:run:123 | Start aws_s3
2016-02-26 19:40:18,734 INFO pid=3126 tid=MainThread file=aws_s3_checkpointer.py:convert_legacy_ckpt_to_new_ckpts:273 | Legacy ckpt is not found for stanza=aws_s3://foobar:AWSS3:testinput, bucket_name=somebucketname
2016-02-26 19:40:21,114 INFO pid=3126 tid=MainThread file=aws_s3_checkpointer.py:__init__:338 | Grapped ckpt.lock
2016-02-26 19:40:21,153 INFO pid=3126 tid=MainThread file=aws_s3_checkpointer.py:__exit__:351 | Removed ckpt.lock
2016-02-26 19:40:21,153 DEBUG pid=3126 tid=MainThread file=aws_s3.py:_do_run:106 | Use single_instance=false
2016-02-26 19:40:21,154 DEBUG pid=3126 tid=MainThread file=data_loader_mgr.py:_read_default_settings:146 | settings: {'thread_min_size': 10, 'task_queue_size': 128, 'thread_max_size': 64, 'process_size': 0}
2016-02-26 19:40:21,159 INFO pid=3126 tid=MainThread file=data_loader_mgr.py:run:53 | DataLoaderManager started.
2016-02-26 19:40:21,161 INFO pid=3126 tid=Thread-4 file=aws_s3_data_loader.py:_do_index_data:77 | Start processing datainput=foobar:AWSS3:testinput, bucket=somebucketname
2016-02-26 19:40:23,370 ERROR pid=3126 tid=Thread-4 file=aws_s3_data_loader.py:index_data:68 | Failed to collect S3 data from bucket_name=somebucketname, error=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_data_loader.py", line 64, in index_data
    self._do_index_data()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_data_loader.py", line 79, in _do_index_data
    self.collect_data()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_data_loader.py", line 85, in collect_data
    conn = s3common.create_s3_connection(self._config)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_common.py", line 168, in create_s3_connection
    config[tac.region] = get_region_for_bucketname(config)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_common.py", line 137, in get_region_for_bucketname
    return try_special_regions(config)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3_mod/aws_s3_common.py", line 89, in try_special_regions
    raise last_exception
S3ResponseError: S3ResponseError: 403 Forbidden

Here is a sample script where I am trying to read the bucket; it produces a similar error, though not as extensive as the log above.

import boto
from boto.s3.connection import S3Connection

conn = S3Connection('secretstuff', 'secrethash')

mybucket = conn.get_bucket('somecoolbucketname')

for key in mybucket.list():
    print key.name