
Why is the SQS-based S3 modular input throwing errors? For example: message="Failed to ingest file" and message="An error occurred while processing the message."

smitra_splunk
Splunk Employee

Hi All,

I had been receiving aws:config logs without issues.
The source values had this format:

s3://mycompany-security-config-bucket-us-east-1/mycompany.IO-config-logs/AWSLogs/183952688868/Config/ap-northeast-1/2018/2/6/ConfigSnapshot/183952688868_Config_ap-northeast-1_ConfigSnapshot_20180206T193155Z_a49edd38-e8a0-4c23-b8d1-d6b60d6fe323.json.gz 

s3://mycompany-security-config-bucket-us-east-1/mycompany.IO-config-logs/AWSLogs/183952688868/Config/us-east-1/2018/2/6/ConfigSnapshot/183952688868_Config_us-east-1_ConfigSnapshot_20180206T194448Z_732959c9-1a52-4798-9fc7-ad4e6649aa69.json.gz 

s3://mycompany-security-config-bucket-us-east-1/mycompany.IO-config-logs/AWSLogs/755496014772/Config/ap-northeast-1/2018/2/6/ConfigSnapshot/755496014772_Config_ap-northeast-1_ConfigSnapshot_20180206T184812Z_369cbc54-ef2e-4f78-bfb6-2123b0159bfb.json.gz 

s3://mycompany-security-config-bucket-us-east-1/mycompany.IO-config-logs/AWSLogs/755496014772/Config/us-east-1/2018/2/6/ConfigHistory/755496014772_Config_us-east-1_ConfigHistory_AWS::CloudWatch::Alarm_20180206T192158Z_20180206T192158Z_1.json.gz 
But all of a sudden the SQS queue started backing up because the modular input started throwing errors. Looking at splunk_ta_aws_aws_sqs_based_s3_AWS-Config.log, I see many errors like the following:

2018-02-06 23:14:09,955 level=ERROR pid=7630 tid=Thread-1 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:_ingest_file:282 | datainput="AWS-Config" start_time=1517958813, message_id="d5e32a88-d283-4351-a3e5-407817759895" ttl=300 created=1517958849.16 job_id=926f7521-68f1-4c2d-bdf6-4ce4729575bd | message="Failed to ingest file." uri="s3://takeda-security-config-bucket-us-east-1/logs/2018-02-06-19-21-49-E65A3727C20B813A"
2018-02-06 23:14:09,955 level=CRITICAL pid=7630 tid=Thread-1 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:_process:265 | datainput="AWS-Config" start_time=1517958813, message_id="d5e32a88-d283-4351-a3e5-407817759895" ttl=300 created=1517958849.16 job_id=926f7521-68f1-4c2d-bdf6-4ce4729575bd | message="An error occurred while processing the message."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 258, in _process
    self._ingest_file(cache, record, headers)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 277, in _ingest_file
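One thing I notice: the URI in the failed ingestion doesn't look like a Config key at all; it looks more like it could be an S3 server access log object. To sanity-check this, I put together a rough regex for the Config key layout shown above (my own approximation, not the add-on's actual matching logic):

```python
import re

# Rough approximation of the AWS Config object key layout seen in the
# working source values above; the add-on's real logic may differ.
CONFIG_KEY = re.compile(
    r"AWSLogs/\d{12}/Config/[a-z0-9-]+/\d{4}/\d{1,2}/\d{1,2}/"
    r"(?:ConfigSnapshot|ConfigHistory)/.+\.json\.gz$"
)

# A key that ingested fine (from the source values above).
config_key = (
    "mycompany.IO-config-logs/AWSLogs/183952688868/Config/ap-northeast-1/"
    "2018/2/6/ConfigSnapshot/183952688868_Config_ap-northeast-1_"
    "ConfigSnapshot_20180206T193155Z_a49edd38-e8a0-4c23-b8d1-d6b60d6fe323.json.gz"
)
# The key from the failing URI in the error log.
failing_key = "logs/2018-02-06-19-21-49-E65A3727C20B813A"

print(bool(CONFIG_KEY.search(config_key)))   # True
print(bool(CONFIG_KEY.search(failing_key)))  # False
```

So the failing object genuinely doesn't fit the layout the input had been handling.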

I don't have access to the AWS console, so what could the problem be? Can an AWS admin change a setting so that the bucket name is absent from the machine? Any hint is appreciated.
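Since I can't get at the console, the one thing I can do from the Splunk host is peek at what the queue is actually delivering. A minimal sketch, assuming the queue carries plain S3 event notifications (if SNS sits in front, the real payload is one more json.loads away, inside the "Message" field); with boto3 one would fetch real bodies via sqs.receive_message without deleting them, but here a canned body stands in for a real message:

```python
import json

def keys_from_notification(body: str):
    """Return (bucket, key) pairs from one S3 event notification body."""
    event = json.loads(body)
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

# Canned body mimicking a standard S3 event notification, using the
# failing key from the error log above.
sample_body = json.dumps({
    "Records": [{
        "s3": {
            "bucket": {"name": "mycompany-security-config-bucket-us-east-1"},
            "object": {"key": "logs/2018-02-06-19-21-49-E65A3727C20B813A"},
        }
    }]
})
print(keys_from_notification(sample_body))
```

Running that against a handful of live messages would show whether objects other than Config snapshots/history files are being delivered to this queue.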

Thanks!

-Shreedeep

ameyap16
Engager

@smitra_splunk Were you able to determine what the problem is? I am seeing the exact same error for a lot of Config log files.
