I am trying to set up a design for our CloudTrail logging that drops all of our Account A CloudTrail logs into an S3 bucket in Account B. The idea is that, in the event of an unauthorized intrusion in Account A, the logs are preserved in Account B.
AWS has built-in functionality to do this, but the ACLs on the actual JSON log files aren't set to automatically match the bucket. So the bucket policies I created to give Account A access to the bucket in Account B don't work, due to the ACLs on the JSON files.
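For context, this is roughly the kind of cross-account delivery policy I have on the Account B bucket, applied here with boto3 (the bucket name and account ID are placeholders):

```python
import json
import boto3

BUCKET = "acct-b-cloudtrail-logs"  # placeholder bucket in Account B
ACCOUNT_A = "111111111111"          # placeholder Account A ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # CloudTrail checks the bucket ACL before delivering logs
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            # Delivery only succeeds when the object ACL grants the
            # bucket owner (Account B) full control
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ACCOUNT_A}/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

s3 = boto3.client("s3")  # run with Account B credentials
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```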
Just curious if anyone has edited the Splunk App for AWS Python code to perform additional actions. My idea would be either to edit the ACLs on the file before acquiring the JSON, or to assume a role before accessing it. A rough sketch of both ideas is below.
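To make that concrete, here's a rough boto3 sketch of what the modified input code could do. The role ARN, bucket, and key are all placeholders; the role would have to exist in Account A with read access to the log objects:

```python
import boto3

# Hypothetical role in Account A that can read the log objects
ROLE_ARN = "arn:aws:iam::111111111111:role/CloudTrailLogReader"
BUCKET = "acct-b-cloudtrail-logs"             # placeholder bucket in Account B
KEY = "AWSLogs/111111111111/CloudTrail/..."   # placeholder object key

# Splunk already has Account B keys; use them to assume into Account A
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="splunk-cloudtrail-read",
)["Credentials"]

# This client acts as Account A, so the per-object ACLs no longer block reads
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Idea 1: fix the object ACL so Account B can read the file directly
s3.put_object_acl(Bucket=BUCKET, Key=KEY, ACL="bucket-owner-full-control")

# Idea 2: just fetch the object with the assumed-role credentials
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
```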
"Account A the logs are preserved in Account B." < If you simply added keys from both accounts to your config and configured CloudTrail inputs from both, this would preserve data from both accounts in Splunk. No real need to move anything between accounts here and if so, it's something you'd likely want to ask in an aws forum or chat.
"Account A the logs are preserved in Account B." < If you simply added keys from both accounts to your config and configured CloudTrail inputs from both, this would preserve data from both accounts in Splunk. No real need to move anything between accounts here and if so, it's something you'd likely want to ask in an aws forum or chat.
Yeah, I was thinking about this wrong (skinning cats and whatnot). I ended up subscribing an SQS queue in Account B to the SNS topics in Account A, then adding the keys from Account B to the Splunk App for AWS. Thanks for the mental nudge!
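In case it helps anyone else, the cross-account wiring looks roughly like this in boto3 (the topic and queue names/ARNs are placeholders):

```python
import json
import boto3

# Placeholders: SNS topic in Account A, SQS queue in Account B
TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:cloudtrail-log-delivery"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/222222222222/cloudtrail-notify"
QUEUE_ARN = "arn:aws:sqs:us-east-1:222222222222:cloudtrail-notify"

# In Account B: allow the Account A topic to send to the queue
sqs = boto3.client("sqs")
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": QUEUE_ARN,
        "Condition": {"ArnEquals": {"aws:SourceArn": TOPIC_ARN}},
    }],
}
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={"Policy": json.dumps(queue_policy)},
)

# Subscribe the queue to the topic (needs sns:Subscribe on the topic)
sns = boto3.client("sns")
sns.subscribe(TopicArn=TOPIC_ARN, Protocol="sqs", Endpoint=QUEUE_ARN)
```

With that in place, the Splunk App for AWS only needs Account B keys pointed at the Account B queue.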