The environment has one search head and two indexers running Splunk 7.2.1. The Splunk App for AWS (5.1.2) and the Splunk Add-on for AWS (4.6.0) are installed. The add-on is loaded on the search head to manage the inputs.
We have set up multiple inputs that are functioning correctly, including the CloudWatch AWS/EC2, EBS, and RDS inputs; however, when attempting to set up the CloudWatch AWS/S3 namespace, we are unable to ingest any data.
We have verified against the docs that all index macros are configured correctly and that the AWS permissions are set correctly.
We see no errors in the aws:cloudwatch:log events:
(index=_internal sourcetype=aws:cloudwatch:log)
The logs show the processing steps for the AWS/S3 inputs completing without error.
Our inputs.conf is listed below. Is there something missing in our configuration, or are there any additional troubleshooting steps we could attempt?
Thanks for any help.
inputs.conf:
[aws_cloudwatch://XXXXXXXX_aws_cloudwatch_56f74518-6038-4bcd-bf6d-c51f687860aa]
aws_account = XXXXXXXX
aws_region = us-west-1
index = aws
metric_dimensions = [{"StorageType":["AllStorageTypes"],"BucketName":["rmp-files"]}]
metric_names = ["NumberOfObjects"]
metric_namespace = AWS/S3
period = 60
polling_interval = 3600
sourcetype = aws:cloudwatch
statistics = ["Average"]
use_metric_format = false
[aws_cloudwatch://XXXXXXXX_aws_cloudwatch_6ff56c72-8097-460d-bbfc-a0e126f6953c]
aws_account = XXXXXXXX
aws_region = us-west-1
index = aws
metric_dimensions = [{"StorageType":["StandardStorage"],"BucketName":["rmp-files"]}]
metric_names = ["BucketSizeBytes"]
metric_namespace = AWS/S3
period = 60
polling_interval = 3600
sourcetype = aws:cloudwatch
statistics = ["Average"]
use_metric_format = false
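For reference, searches along these lines can show whether any AWS/S3 data points arrived and whether the input itself logged problems. This is a sketch: field extractions vary by add-on version, so it relies on keyword matching rather than specific extracted fields.

```
# Did any CloudWatch events for the S3 namespace reach the target index?
index=aws sourcetype=aws:cloudwatch "AWS/S3"

# Any warnings or errors from the CloudWatch modular input itself?
index=_internal sourcetype=aws:cloudwatch:log (ERROR OR WARNING)
```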
I am facing the same issue and I am out of troubleshooting options; I hope someone can help here.
Thanks
We have the same issue here.
I am not sure if it will work for you, but here is what I did to get around the issue.
In the Add-on's configuration of inputs:
- Edit the existing CloudWatch input, choose "Edit in advanced mode", remove the AWS/S3 namespace entirely, and save/update.
- Create a new CloudWatch input (with a slightly different name) and delete all of the namespaces except AWS/S3.
- In this new input, for just AWS/S3, change the Period value under "Advanced Settings" to 3600.
The Period value has to be the same for all namespaces within an input, so you cannot change it for just this one namespace without creating a new input; that holds even when editing inputs.conf directly (the add-on does not tolerate any variation in period within the same defined input). Also, the S3 bucket CloudWatch metrics are only reported once per day, which may be the underlying issue.
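Applied to the stanzas posted in the question, the resulting dedicated AWS/S3 input would look something like this. This is a sketch: the stanza name `XXXXXXXX_aws_cloudwatch_s3_only` is hypothetical, and the account/region/dimension values are carried over from the original post.

```ini
# Hypothetical stanza name: a new input containing only the AWS/S3 namespace.
# period raised from 60 to 3600 to match CloudWatch's daily reporting
# cadence for S3 storage metrics.
[aws_cloudwatch://XXXXXXXX_aws_cloudwatch_s3_only]
aws_account = XXXXXXXX
aws_region = us-west-1
index = aws
metric_dimensions = [{"StorageType":["AllStorageTypes"],"BucketName":["rmp-files"]}]
metric_names = ["NumberOfObjects"]
metric_namespace = AWS/S3
period = 3600
polling_interval = 3600
sourcetype = aws:cloudwatch
statistics = ["Average"]
use_metric_format = false
```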
thanks buddy. saved the day!
I thought there was no way this solution would work but sure enough, it fixed things right up. Running v4.6.1 of Splunk Add-on for AWS in Splunk Cloud. Now getting all metrics plus S3 NumberOfObjects and BucketSizeBytes.
dpsoukup,
Your suggestion fixed my issue! I greatly appreciate your assistance!
Thanks
Thank you dpsoukup, this fixed the original issue. Much appreciated.
It resolved our issue as well, thanks!!