Splunk has to fit into a structured and unstructured world without duplicating effort. Splunk currently serves as our standard log collection mechanism, but I would like to see Splunk also support feeding additional third-party solutions.
To this end, I would like to know if the following is supported or if anyone has tried these options:
1) Roll buckets at the coldToFrozen AND warmToCold transitions into Hadoop while keeping a searchable copy indexed in Splunk. This data would need to be searchable via Splunk Enterprise as well as Hadoop utilities like Pig or Hive.
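For the archive half of option 1, a minimal sketch using the stock `coldToFrozenScript` setting in `indexes.conf` might look like the following. The index name, script path, and HDFS target directory here are assumptions for illustration; Splunk invokes the script with the bucket path as its first argument just before the bucket is frozen.

```
# indexes.conf -- hedged sketch; index name and script path are illustrative
[main]
coldToFrozenScript = "$SPLUNK_HOME/bin/hdfs_archive.sh"
```

The referenced script could then copy the bucket into HDFS with something like `hdfs dfs -put "$1" /archive/splunk/`, making the raw data reachable from Pig or Hive. Note this only covers coldToFrozen; Splunk does not expose an equivalent hook at the warmToCold transition, so that part would need a separate copy job.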
2) I would like Splunk to collect data via a UF or HEC, and I would like that data to be routable to Kafka in addition to Splunk Enterprise. I was thinking of having an HF layer, which already exists at my company, route to Fluentd via Splunk's TCP or UDP output. Fluentd could then route to Kafka.
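For option 2, a heavy forwarder can clone data to both the indexers and a third-party TCP listener via `outputs.conf`, provided the third-party group is sent uncooked data. A hedged sketch, with hostnames and ports as placeholders:

```
# outputs.conf on the HF -- hosts/ports are illustrative
[tcpout]
defaultGroup = splunk_indexers, fluentd_out

[tcpout:splunk_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:fluentd_out]
server = fluentd.example.com:24224
# Send raw (uncooked) data so a non-Splunk receiver can parse it
sendCookedData = false
```

Fluentd would then listen on the matching port and forward to Kafka. One caveat: `sendCookedData = false` sends the raw stream, so any parsing Splunk did on the HF is not carried over to the Fluentd side.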
The other option is to remove the UF or HEC and replace it with Fluentd at the edge. Fluentd has community-developed plug-ins to send to Kafka, HDFS, and Splunk Enterprise. I have not tested this solution.
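As a sketch of that untested option, a single Fluentd `copy` output can fan events out to Kafka and Splunk HEC in parallel. This assumes the `fluent-plugin-kafka` and `fluent-plugin-splunk-hec` plug-ins are installed; brokers, tag pattern, and token are placeholders:

```
# fluent.conf -- hedged sketch; endpoints and token are illustrative
<source>
  @type forward
  port 24224
</source>

<match app.**>
  @type copy
  <store>
    @type kafka2
    brokers kafka1.example.com:9092
    default_topic logs
  </store>
  <store>
    @type splunk_hec
    hec_host splunk.example.com
    hec_port 8088
    hec_token YOUR_HEC_TOKEN
  </store>
</match>
```

An HDFS `<store>` could be added the same way via `fluent-plugin-webhdfs`. The trade-off is that Splunk-side parsing, load balancing, and indexer acknowledgment behavior from the UF are lost.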
Any thoughts from anyone?