All Apps and Add-ons

Error from Splunk Add-on for AWS


Dear Team,

This is my setup for analyzing logs from S3:

1 - Splunk Enterprise 8.1 on a standalone VM.

2 - An IAM role for the S3 bucket containing the logs.

3 - The Splunk Add-on for AWS installed.

4 - On the first run, everything was okay. However, I shut down the VM and increased its RAM, and here the problem starts:

Query from the health check:

index="_internal" (host="*") (sourcetype=aws:s3:log OR sourcetype=aws:logs:log OR sourcetype=aws:sqsbaseds3:log OR sourcetype=aws:description:log OR sourcetype=aws:cloudwatch:log) (datainput="*") level=ERROR message="Failed to collect data through generic S3." | fillnull value="" ErrorCode, ErrorDetail | eval ErrorDetail = if((ErrorDetail == "" or ErrorDetail == "''") and !isnull(message), message, ErrorDetail)


2020-10-24 01:23:18,036 level=ERROR pid=25464 tid=Thread-7 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader | datainput="bucket-log" bucket_name="logs-storage" | message="Failed to collect data through generic S3." start_time=1603473783 job_uid="f852cf4b-f1fe-4197-bf93-3494f3d2adb7"
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/", line 86, in index_data
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/", line 107, in _do_index_data
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/", line 153, in collect_data
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/", line 233, in _discover_keys
    for key in keys:
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/", line 227, in get_keys
    for key in keys:
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/", line 196, in bucket_lister
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/solnlib/", line 172, in wrapper
    raise last_ex
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/solnlib/", line 159, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/", line 186, in get_all_keys
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/s3/", line 474, in get_all_keys
    '', headers, **params)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/s3/", line 412, in _get_all
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>ExpiredToken</Code><Message>The provided token has expired.</Message><Token-0>xxxx</Token-0><RequestId>aaaaaaa</RequestId><HostId>Ibbbbb</HostId></Error>
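For context on the `ExpiredToken` error above: AWS rejects requests signed with expired temporary credentials, and a VM clock that drifts (for example, after a shutdown and reconfiguration) can make otherwise-valid tokens appear expired. A minimal sketch for quantifying clock skew, assuming you have a trusted server timestamp in HTTP-date format (e.g. from `curl -sI https://s3.amazonaws.com`; the function name and sample values are illustrative, not from the add-on):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime


def clock_skew_seconds(http_date: str, local_now: datetime) -> float:
    """Return how far the local clock is ahead of the server clock, in seconds.

    http_date: an RFC 7231 HTTP-date string, e.g. a response's Date header.
    local_now: the local machine's current time (timezone-aware).
    """
    server_time = parsedate_to_datetime(http_date)
    return (local_now - server_time).total_seconds()


# Hypothetical values: server says 01:23:18 UTC, the VM thinks it is 01:28:18 UTC.
skew = clock_skew_seconds(
    "Sat, 24 Oct 2020 01:23:18 GMT",
    datetime(2020, 10, 24, 1, 28, 18, tzinfo=timezone.utc),
)
print(skew)  # → 300.0 (the VM clock is five minutes ahead)
```

A skew of more than a few minutes is enough to break AWS request signing, which matches the NTP advice later in this thread.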


I would like to know what the root cause of this is, and how to fix it.


Loves-to-Learn Lots

@kennetkline I'm using Hyper-V Manager, and as far as I know, the time settings on both the VM and the server are correct; the VM takes the time of the host machine. Is there any other place I should check, Ken?



Thank you, Sir. Let me try again 🙂


Path Finder

OK; it's a VM.

Your time is off too far; the expired token in your log says so.

I have seen this too often: staff don't set up NTP on the ESXi servers, then VMs are built and powered on with a time years in the future, which jacks up the images. After patching etc., they fix the timestamps and everything is a mess. This impacts AD, etc.

Check and enable NTP on the ESXi host, check/enable NTP sync on the VM, verify the time, then try again.
