I'm exploring Splunk and it seems to be a more generic tool than Omniture. We need to do our "sales funnel" analysis across the end-to-end customer journey. We also need to integrate with our organization's data warehouse.
Compared to Omniture, which requires tags to be inserted in a web page, what's the methodology of using Splunk? We don't use Apache or anything, we use a content management system with Akamai in front of it. Akamai has some user logs.
We also need to pull in data from other data sources such as marketing or our operations team inhouse.
In this scenario, is my understanding correct that we will need to create a ton of custom code to get the basic web analytics: where people come from, where they went, how much time they spent, etc? Will we have to write queries from scratch?
"In this scenario, is my understanding correct that we will need to create a ton of custom code to get the basic web analytics: where people come from, where they went, how much time they spent, etc? Will we have to write queries from scratch?"
"Where people come from" - With the Google Maps app installed, the origin of a user can be determined from the client IP address in the logs, with a search as simple as the one below:
sourcetype=web_logs | geoip client_ip
This will tag every event with the country and city of origin, as well as the latitude & longitude (among other things). No custom code required.
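From there, a summary report is one more pipe away. A sketch, assuming the geoip command has added a country field to each event (the exact field name, e.g. client_ip_country, depends on the app version, so check your field list first):
sourcetype=web_logs | geoip client_ip | top limit=10 client_ip_country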
"Where they went" - If it's in the logs, Splunk can report it.
sourcetype=web_logs client_ip="126.96.36.199" | stats values(url)
You might not even have to write this query... it may be possible just to click through the GUI to get to the reports you're after. No custom code required.
"How long they spent" - Splunk has a built-in transaction command which, once you define the start and end events or a common field, will give you the duration of a transaction. A possible example...
sourcetype=web_logs | transaction session_id | table client_ip, username, duration
Once again, no custom code.
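And if you want the average time spent rather than the per-session durations, pipe the transactions into stats (same assumed session_id field as above; transaction's duration field is in seconds):
sourcetype=web_logs | transaction session_id | stats avg(duration) as avg_session_seconds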
RE: pulling in data from other sources... this is Splunk's bread & butter. It doesn't care about the structure of the data; it just needs to know when each event was generated so the data can be used effectively once it's indexed.
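For example, once your Akamai logs and a marketing feed are both indexed, a single search can report across them side by side (the sourcetype names here are made up; yours will depend on how you define the inputs):
sourcetype=akamai_logs OR sourcetype=marketing_feed | timechart count by sourcetype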
I think you're overlooking the power & flexibility of Splunk, which may be because you've never really taken it for a spin. Splunk is a powerful platform, so a certain amount of complexity and an initial learning curve is to be expected. But once you've got your head around it, you'll find yourself getting far more out of it than you can with a product like Omniture (IMHO).
There's an excellent introductory tutorial here: http://docs.splunk.com/Documentation/Splunk/latest/Tutorial/WelcometotheSplunktutorial
This will get your head around what Splunk is, as well as (a little of) what it's capable of doing.
EDIT: Replace the word "generic" in your question with "flexible", and you're getting there 🙂