We are not Splunking our Akamai logs; we are Splunking our Amazon CloudFront logs. And we are about to start Splunking logs from our secondary CDN (Limelight).
CloudFront logs are written once per hour. We have crons that check for new logs twice per hour. So while this is not real time, it's pretty good. And things get indexed in near real time once the logs are pulled down.
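A minimal sketch of that pull-then-index setup, assuming CloudFront is delivering standard access logs to an S3 bucket and the AWS CLI is installed (the bucket name, local path, schedule, index, and sourcetype here are all placeholders, not the poster's actual configuration):

```
# crontab: sync new CloudFront logs down twice per hour
5,35 * * * * aws s3 sync s3://my-cloudfront-logs/ /var/log/cloudfront/ --quiet
```

Splunk then just monitors that directory via inputs.conf:

```
[monitor:///var/log/cloudfront]
index = cdn
sourcetype = aws:cloudfront:accesslogs
```

Since `aws s3 sync` only copies objects it hasn't seen, and Splunk's file monitor tails new files as they appear, each half-hourly run indexes only the new log files.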
We treat these much like our Apache web logs, and are using them for lots of interesting things.
I'm not doing anything magical with the Akamai logs, but we do ingest them into Splunk. We miss the real-time access to our logs that we get from our own servers, but having insight into the data is especially important since we don't own the web servers. With our own hardware, we know something is up with foo.com when CPU or memory spikes radically; with a CDN we don't get that signal.
With Akamai, since we don't own or have to worry about the hardware, the access data takes on a higher importance. It's critical to monitor the traffic so we aren't surprised by a large increase that, on our own servers, would have shown up in other ways.
Splunk plus Akamai is a great combination for us: we simply had to receive the logs from Akamai (on an FTP server) and point Splunk at that directory. Basic searches and saved Advanced Charting searches give our users the visibility they need. Saved searches with alerts (email if traffic falls below X) are very handy.
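As a sketch of what sits behind that kind of alert, a saved search like the following would work; the index, sourcetype, and threshold are assumptions, not taken from this setup:

```
index=akamai sourcetype=akamai:log earliest=-15m
| stats count AS requests
```

Schedule it every 15 minutes with an alert condition such as "trigger if requests < 1000" and an email action, and you get notified when traffic falls below the expected floor.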
I too want to ingest Akamai logs and use them in the Splunk App for Akamai. In our setup we have 1 search head, 2 indexers, and >50 forwarders, but I am unable to follow the Splunk documentation on how to do so. Can you please help me set up the HTTP Event Collector configuration?
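Not knowing your exact environment, here is a hedged sketch of a basic HTTP Event Collector setup. The token, index, and sourcetype below are placeholders; in a distributed deployment like yours the HEC input typically lives on the indexers or a heavy forwarder, and can also be created in the UI under Settings > Data Inputs > HTTP Event Collector. In inputs.conf:

```
[http]
disabled = 0
port = 8088

[http://akamai_hec]
disabled = 0
token = 00000000-0000-0000-0000-000000000000
index = akamai
sourcetype = akamai:log
```

You can then verify the endpoint with curl (hostname and token are placeholders):

```
curl -k https://indexer.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": "sample akamai log line", "sourcetype": "akamai:log"}'
```

A successful send returns `{"text":"Success","code":0}`, and the event should be searchable in the `akamai` index.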
Thanks in advance
Actually, I am asking this question for someone else. Yes, they want to ingest the access logs and understand what useful web analytics they can do with the data once it's ingested, beyond what you might do with Apache or IIS access logs, if any.
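One thing CDN access logs offer that Apache/IIS logs don't is cache behavior. As an illustrative sketch for CloudFront (the sourcetype is an assumption, and this presumes the `x_edge_result_type` field from CloudFront's standard log format has been extracted; Akamai logs carry a similar cache-status field):

```
sourcetype=aws:cloudfront:accesslogs
| stats count BY x_edge_result_type
| eventstats sum(count) AS total
| eval pct = round(100 * count / total, 1)
```

That gives a cache hit/miss breakdown, i.e. how much traffic the CDN is absorbing versus passing to origin. Similar searches over the edge-location field show where your users are geographically, which origin logs alone can't tell you.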