
Splunk and AWS CloudFront data

Explorer

I am attempting to set up a CloudFront data input using the Splunk App for AWS. My steps so far:
- Create/use a distribution accessible through the AWS console
- Turn on logging for the distribution and assign it to an S3 bucket
- Create the data input in Splunk

AWS account: <selected mine>
AWS region: us-east-1
Metric namespace: AWS/CloudFront
Metric Names: ["Requests","BytesDownloaded","BytesUploaded","TotalErrorRate","4xxErrorRate","5xxErrorRate"]
Dimension Names: [{"DistributionId":"E1GGN2SAMEXDYG", "Region":"Global"}]
Metric stats: ["Average", "Sum", "SampleCount", "Maximum", "Minimum"]
Granularity: 60 (I've tried 3600 as well)
Polling: 60 (I've tried 3600 as well)
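For reference, here is roughly what that input looks like as an inputs.conf stanza. This is a hedged sketch: the stanza name and key names (`aws_cloudwatch`, `metric_dimensions`, `polling_interval`, etc.) are assumptions based on the add-on's conventions and may differ by app version, so verify them against your app's inputs.conf.spec.

```ini
# Hypothetical CloudWatch input stanza for the Splunk App for AWS.
# Key names are assumptions -- check inputs.conf.spec for your version.
[aws_cloudwatch://cloudfront-metrics]
aws_account = my-aws-account
aws_region = us-east-1
metric_namespace = AWS/CloudFront
metric_names = ["Requests","BytesDownloaded","BytesUploaded","TotalErrorRate","4xxErrorRate","5xxErrorRate"]
metric_dimensions = [{"DistributionId":"E1GGN2SAMEXDYG","Region":"Global"}]
statistics = ["Average","Sum","SampleCount","Maximum","Minimum"]
period = 60
polling_interval = 60
sourcetype = aws:cloudwatch
```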

Based on everything I have seen, this should work, but I can't find a single good example showing a successful capture of CloudFront data.

Please advise.


Splunk Employee

Related to this, you may want to refer to the following post on how to onboard raw AWS CloudFront logs to Splunk:
http://answers.splunk.com/answers/311972/aws-cloudfront.html#answer-315294


Splunk Employee

What do you see with this search?

index=_internal sourcetype=aws:cloudwatch:log

Explorer

What I decided to do was bypass the metrics and pull the raw data from the S3 bucket. I figured the metrics would be easy enough to replicate.
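If you go that route outside the app, the raw files are gzip-compressed, tab-separated W3C-extended logs with `#Version` and `#Fields` header lines. Here is a minimal parsing sketch; the bucket and prefix names in the commented boto3 snippet are hypothetical, and this is not the app's own ingestion path:

```python
import gzip


def parse_cloudfront_log(data: bytes) -> list[dict]:
    """Parse one gzipped CloudFront access log file into a list of dicts.

    The '#Fields:' header names the columns (space-separated); data rows
    are tab-separated. Other '#' lines and blanks are skipped.
    """
    text = gzip.decompress(data).decode("utf-8")
    fields: list[str] = []
    records: list[dict] = []
    for line in text.splitlines():
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
        elif line.startswith("#") or not line.strip():
            continue
        else:
            records.append(dict(zip(fields, line.split("\t"))))
    return records


# Fetching the raw log objects from the distribution's log bucket
# (bucket name and key prefix below are hypothetical examples):
# import boto3
# s3 = boto3.client("s3")
# resp = s3.list_objects_v2(Bucket="my-cf-logs", Prefix="E1GGN2SAMEXDYG.")
# for obj in resp.get("Contents", []):
#     body = s3.get_object(Bucket="my-cf-logs", Key=obj["Key"])["Body"].read()
#     records = parse_cloudfront_log(body)
```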


Explorer

I see the message below in several different variations, with varying numbers of metrics and failures:

2015-07-21 02:21:01,671 INFO pid=10489 tid=QueryWorkerThread-2 file=awscloudwatch.py:mainworkloop:582 | Queried 2 metrics with 2 failures totaling 0 statistics in 0.439s before stalling.


Splunk Employee

Ah, interesting... that means the notifications are showing up in SQS, but the parsing process doesn't recognize them (or possibly the S3 files that the notifications point to). A support case and a diag are the next steps. Is this a multi-account setup?
