We are currently trying to set up a reliable solution for moving data from Splunk to an HDFS location. This is not for archiving: we want the data in HDFS so that we can process it further on the Hadoop cluster using the Apache Spark processing framework. We have looked at these options
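For context on what such a pipeline involves, here is a minimal sketch of one common approach: stream search results out of Splunk via its REST export endpoint and land them in HDFS over WebHDFS. The host names, credentials, index, and paths below are placeholder assumptions, not anything from this thread:

```python
# Hypothetical sketch: pull events out of Splunk with the REST export API
# and write them to HDFS via WebHDFS. Hosts and paths are placeholders.

SPLUNK_BASE = "https://splunk.example.com:8089"          # Splunk management port
WEBHDFS_BASE = "http://namenode.example.com:9870/webhdfs/v1"  # Hadoop 3 default

def export_url(base=SPLUNK_BASE):
    # /services/search/jobs/export streams results back as the search runs,
    # so large exports do not need to be paged.
    return f"{base}/services/search/jobs/export"

def export_params(search, earliest="-24h", latest="now"):
    # output_mode=json emits one JSON object per result, which is convenient
    # for line-oriented processing in Spark later.
    return {
        "search": f"search {search}",
        "earliest_time": earliest,
        "latest_time": latest,
        "output_mode": "json",
    }

def webhdfs_create_url(hdfs_path, base=WEBHDFS_BASE):
    # WebHDFS CREATE op; the namenode replies with a 307 redirect to a
    # datanode, which accepts the actual file body.
    return f"{base}{hdfs_path}?op=CREATE&overwrite=true"
```

In practice you would POST `export_params("index=main")` to `export_url()` with a streaming HTTP client (authenticated against Splunk), then PUT the response body to `webhdfs_create_url("/data/splunk/events.json")`, following the namenode's redirect.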
Thanks in advance
Manu Mukundan
Of the options you list, Option 1 is the best, as it gives you the most "pure" data.
But the key question here is: WHY do you need the data in Splunk at all, then? Could you have split the stream before it reaches Splunk?
There is another option, Cribl LogStream (https://cribl.io/), if you want to redirect your data before it reaches Splunk.
Also, if you're thinking of going the NiFi route, I would highly recommend checking out this blog post, where we compare its performance to Cribl LogStream and show that its performance is pretty poor.
I'm guessing you work for Cribl? Anyone who has been around the block knows vendor-executed benchmarks are dishonest.
I know this because Cribl was considerably slower and buggier for our use case. It's written in Node, for crying out loud!