In handler 'vix-indexes':
[hadoop-provider] Error while running external process, return_code=255. See search.log for more info.
[hadoop-provider] RuntimeException - Failed to validate path and index: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).; Host Details: local host is: "splunk_hostname_redacted/splunk_ip_address_redacted"; destination host is: "hadoop_name_node_redacted":8020;
indexes.conf:
[provider:hadoop-provider]
vix.splunk.search.splitter = ParquetSplitGenerator
vix.description = parquet_test
vix.env.HADOOP_HOME = /usr/lib/hadoop
vix.env.JAVA_HOME = /usr/java/latest
vix.family = hadoop
vix.fs.default.name = hadoop_name_node_redacted:8020
vix.mapred.job.tracker = job_tracker_redacted:8021
vix.output.buckets.max.network.bandwidth = 0
vix.splunk.home.hdfs = hdfs_user_dir_redacted/splunk
vix.splunk.home.datanode = hdfs_user_dir_redacted/tmp
vix.command.arg.3 = $SPLUNK_HOME/bin/jars/SplunkMR-s6.0-h2.0.jar
vix.splunk.search.debug = 1
vix.splunk.impersonation = 0
This could happen if your Hadoop client jars don't match your Hadoop cluster. Are you sure the contents of /usr/lib/hadoop are the same distribution and version as what your cluster is using?
Yep, I think that was it. I was actually trying to connect to a different cloud that was MR2 vs. MR1, so the libs were different. I'm uninstalling Splunk as a gateway from one cloud and adding it to the other. I'm guessing there isn't a way to be a gateway to two clouds, especially if they are different versions?
Yes, there is. You just have to: a) have the appropriate Hadoop libs for each cluster available on the Hunk node, and b) set HADOOP_HOME differently for the providers that belong to the different cluster versions.
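For example, two provider stanzas in indexes.conf could each point vix.env.HADOOP_HOME at a different client-library directory. (The stanza names, paths, and hostnames below are hypothetical placeholders, not values from this thread.)

[provider:mr1-provider]
vix.family = hadoop
vix.env.HADOOP_HOME = /usr/lib/hadoop-mr1
vix.env.JAVA_HOME = /usr/java/latest
vix.fs.default.name = hdfs://mr1_name_node:8020

[provider:mr2-provider]
vix.family = hadoop
vix.env.HADOOP_HOME = /usr/lib/hadoop-mr2
vix.env.JAVA_HOME = /usr/java/latest
vix.fs.default.name = hdfs://mr2_name_node:8020

Each virtual index then references whichever provider matches its cluster, so one Hunk node can search both clusters.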