Trying to determine Splunk's value proposition with its universal data platform, and the key reasons it's better than the competition?
Just from experience, the competitors I've worked with or considered are (I hate when people call these log management tools; I consider them big data tools):
- ELK (Elasticsearch - Logstash - Kibana)
- Hadoop/Spark and variants
- SumoLogic
Some key things from my experience
- Splunk is highly enterprise-class and very easy to install. Hadoop is a real pain unless you have good Java developers. ELK is really good and true competition to Splunk. SumoLogic is good too, but I have limited experience with it.
- Splunk offers both on-premises and cloud. SumoLogic is cloud-only, which may not suit every organisation. The ELK and Hadoop variants I've worked with were on-premises only.
- Agents/data shippers. None of the agents out there compare to Splunk Universal Forwarders, and this makes a huge difference at the enterprise level. ELK's Beats sound promising, but they are nowhere near Splunk UF in load balancing, consistency, secure transfer, cross-platform support, and run-as functionality.
- Though Hadoop is very extensible, it literally takes 2-3 weeks to write a simple demo/POC (versus minutes in Splunk/ELK). So time to market is huge with Hadoop/Spark, and it requires good Java developers on your team. If your organisation is huge, I'd bet a Hadoop project will last two years.
- Cost. Splunk costs approximately $1,000 for 1 GB (one-time cost) plus support costs. Hadoop is free, but don't even think of going live without Cloudera/Hortonworks, and the development time is costly. ELK seems to be the winner here: easy to set up, but scaling is quite tricky.
- Learning curve. Splunk is easy to learn and pretty consistent. ELK changes to a new ideology every year, which can be tricky for employees. Hadoop is the wild west, with layers and development moving too fast for the enterprise level and a huge learning curve.
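To illustrate what I mean by load balancing on the forwarder side: a Splunk UF rotates its output across a pool of indexers and fails over when one stops responding. The Python sketch below is just a toy model of that idea (round-robin with failover), not Splunk's actual implementation; the indexer names and port are made up for the example.

```python
class Forwarder:
    """Toy model of a load-balanced data shipper: round-robin across
    a pool of indexers, skipping any that are marked down."""

    def __init__(self, indexers):
        self.indexers = list(indexers)   # e.g. ["idx1:9997", "idx2:9997"]
        self.down = set()                # indexers currently unreachable
        self.pos = 0

    def next_indexer(self):
        # Walk the pool round-robin until we find a healthy indexer.
        for _ in range(len(self.indexers)):
            target = self.indexers[self.pos % len(self.indexers)]
            self.pos += 1
            if target not in self.down:
                return target
        raise RuntimeError("all indexers are down")

    def send(self, event):
        # A real forwarder would open a TCP/TLS connection here;
        # we just return (destination, payload) to show the routing.
        return (self.next_indexer(), event)

fwd = Forwarder(["idx1:9997", "idx2:9997", "idx3:9997"])
print(fwd.send("login ok"))      # goes to idx1:9997
fwd.down.add("idx2:9997")        # simulate an indexer going down
print(fwd.send("login failed"))  # skips idx2, goes to idx3:9997
```

The point is not the ten lines of Python; it's that the UF gives you this behaviour (plus secure transfer and cross-platform agents) out of the box, where other shippers make you build or configure it yourself.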
Options you could consider
- Truly enterprise-level data analytics = Splunk Enterprise
- Low cost for SMEs = ELK or Splunk Light
- Highly flexible, complete new application development = Hadoop/Spark and variants, or Splunk Enterprise with SDKs
- Completely SaaS option = Splunk Cloud or SumoLogic
- Mixed option = use ELK/Hadoop as a dump for less-relevant data, but Splunk for the real core and valuable datasets
I would emphasize scalable and usable. One person (not a software developer), with basic network, infrastructure, and hardware support from the facility, can install Splunk indexers, search heads, and hundreds of forwarders in a relatively short amount of time. Splunk (paid) software is supported very well by Splunk Inc., and Splunk software (paid and free) is supported very well by the Splunk community (Splunk Answers). Splunk is also very well documented, with admin manuals, config references, release notes, etc. If you can read, you can learn Splunk. With Splunk and basic internal support, one person can be in a position to answer questions like "what happened, when, and who did it?" in an environment with multiple networks, dozens of servers, and hundreds of users. Scaling from small networks to very large enterprises might take more than one person; it depends on the person and the enterprise.
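To make the "what happened, when, and who did it?" point concrete: once the data is centralised, that question is a one-liner in Splunk's search language, something like `status=failure | stats count by user`. The Python sketch below does the same key=value field extraction and aggregation by hand over a few hypothetical auth log lines (the field names `user=`, `action=`, `status=` are made up for the example; real sources vary).

```python
from collections import Counter

# Hypothetical auth log lines, invented for illustration.
logs = [
    "2024-05-01T09:12:03 host=web01 user=alice action=login status=success",
    "2024-05-01T09:13:41 host=web01 user=bob   action=login status=failure",
    "2024-05-01T09:13:55 host=db01  user=bob   action=login status=failure",
    "2024-05-01T09:14:10 host=db01  user=carol action=delete status=success",
]

def fields(line):
    """Parse the timestamp and key=value pairs out of one log line."""
    parts = line.split()
    out = {"_time": parts[0]}
    for p in parts[1:]:
        if "=" in p:
            k, v = p.split("=", 1)
            out[k] = v
    return out

# "Who did it, and when?" -- count failed logins per user, roughly what
# `status=failure | stats count by user` does in Splunk.
events = [fields(l) for l in logs]
failures = Counter(e["user"] for e in events if e["status"] == "failure")
print(failures)  # bob shows up twice
```

Splunk does this extraction automatically at search time across every host you've deployed a forwarder to, which is exactly why one person can answer those questions for an entire environment.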