We are having problems with S3 bucket ingestion. Our corporate security policy requires us to keep two years of ELB logs in S3, so we lifecycle them into Glacier. Unfortunately, this means that when the AWS app connects to the bucket, it hits a large number of objects in a different storage class, which produces thousands of errors. For some reason this kills the process, so we can't get new data. When I look at the source, I do see the count going up, but we can't seem to get anything newer than the first time this ran, which was about a week ago.
These are the sanitized errors we see constantly scrolling through aws_s3.log:
2015-10-20 22:55:12,240 ERROR pid=22473 tid=MainThread file=aws_s3.py:stream_events:868 | Incomplete: bucket: 'OURBUCKET' key: u'OURPREFIX/AWSLogs/OURACCOUNT#/elasticloadbalancing/OURZONE/2014/05/19/OURACCOUNT#_elasticloadbalancing_OURZONE_OURELBNAME_20140519T0100Z_10.241.4.64_3indsmej.log' etag: "2464d51635ffc8954e36b59f479f72cc" attempt_number: 2 orig_size: 1119509 bytes_streamed: 0 total_bytes_streamed: 0 Exception: S3ResponseError: 403 Forbidden - InvalidObjectState - The operation is not valid for the object's storage class
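From the `InvalidObjectState` error, the GETs are failing because the objects have transitioned to Glacier, which doesn't allow an immediate GET. As a sketch of the behavior we'd want instead, a listing could be filtered by `StorageClass` before streaming, so archived keys are skipped rather than aborting the run (this is an illustrative helper, not the add-on's actual code; the key names are placeholders):

```python
# Storage classes that reject an immediate GET with InvalidObjectState.
ARCHIVED_CLASSES = {"GLACIER", "DEEP_ARCHIVE"}

def streamable_keys(contents):
    """Given the 'Contents' list from an S3 ListObjects-style response,
    return only the keys whose storage class permits a direct GET.

    Objects listed without a StorageClass are treated as STANDARD.
    """
    return [
        obj["Key"]
        for obj in contents
        if obj.get("StorageClass", "STANDARD") not in ARCHIVED_CLASSES
    ]

# Example listing shaped like an S3 response: only the non-Glacier
# object should be streamed.
listing = [
    {"Key": "AWSLogs/2015/10/20/elb_a.log", "StorageClass": "STANDARD"},
    {"Key": "AWSLogs/2014/05/19/elb_b.log", "StorageClass": "GLACIER"},
]
print(streamable_keys(listing))  # only the 2015 STANDARD key
```

In our case the lifecycled 2014 objects would be skipped instead of raising a 403 that halts the whole input.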