Is there an Integration between Spark and Splunk?
Hello everyone,
I want to integrate Spark and Splunk, using Spark to process searches faster.
With Splunk Analytics for Hadoop, I can set up HDFS as a virtual index, but that uses Hadoop/MapReduce to fetch the data. How can I use Spark instead? Thanks, everyone.
P.S.: I tried to sign up for the Splunk MLTK Connector for Apache Spark, but apparently signups are closed.
Right, that was the basic design:
-- With Splunk Analytics for Hadoop, I can set up HDFS as a virtual index, but that uses Hadoop/MapReduce to fetch the data.
I wonder if the product has evolved since then. Perhaps @rdagan_splunk can shed light on this topic.

Three great presentations that I hope will take you where you need to go:
https://conf.splunk.com/files/2017/slides/advanced-analytics-with-splunk-using-apache-spark-machine-...
https://conf.splunk.com/files/2016/slides/splunk-and-open-source-integrations-with-spark-solr-hadoop...
https://conf.splunk.com/files/2017/slides/unleash-your-machine-data-with-context-from-historical-and...
Hope it helps!
Thank you for this, but these solutions propose using Spark only to process the data, while still using Splunk Analytics for Hadoop to search it. So it still uses MapReduce, or not?
Thanks again

Hi, these solutions actually used Splunk DB Connect with a JDBC driver to talk to Spark SQL, so the search work runs in Spark rather than in MapReduce.
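In case it helps, here is a minimal sketch of how that kind of DB Connect-to-Spark-SQL hookup is usually wired: Spark ships a HiveServer2-compatible Thrift JDBC server, and DB Connect can point at it through the standard Hive JDBC driver. The host names, ports, and file paths below are placeholders, not settings taken from the presentations above, so check your own environment and DB Connect version.

```shell
# On the Spark side: start Spark's built-in Thrift JDBC/ODBC server
# (a HiveServer2-compatible endpoint; listens on port 10000 by default).
$SPARK_HOME/sbin/start-thriftserver.sh \
  --master yarn \
  --hiveconf hive.server2.thrift.port=10000

# On the Splunk side: drop the Hive JDBC driver jar where DB Connect
# looks for drivers (the exact path varies by DB Connect version).
cp hive-jdbc-standalone.jar \
  $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/

# Then create a DB Connect connection whose JDBC URL points at the
# Thrift server, e.g.:
#   jdbc:hive2://spark-host:10000/default
```

Once the connection is in place, queries issued through DB Connect (e.g. via `dbxquery`) against that connection are executed by Spark SQL, which is what lets you bypass the MapReduce path.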
