Splunk Search

Why am I getting this error in the search.log when extracting Hive data in Splunk?

maximus_reborn
Path Finder

I am getting the error below in search.log when extracting Hive data in Splunk.
I am using the Hive Thrift metastore while extracting the data.

Caused by: java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$CompleteRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet

kschon_splunk
Splunk Employee

That looks like a version mismatch error. Can you confirm which Hive version your metastore is running? And can you paste here the vix.env.HUNK_THIRDPARTY_JARS property from the provider you are searching with?

maximus_reborn
Path Finder

Are you asking about the information below? If not, please let me know how to check the metastore version.
$ schematool -dbType mysql -info
Metastore connection URL: jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: root
Hive distribution version: 1.2.0
Metastore schema version: 1.2.0

I don't have the vix.env.HUNK_THIRDPARTY_JARS property at all. 😞 What value should it have?
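For reference, the schema version can also be read straight out of the backing metastore database; a sketch assuming the MySQL setup from the connection URL above (the metastore keeps it in the VERSION table):

$ mysql -h 127.0.0.1 -u root -p hive -e "SELECT SCHEMA_VERSION, VERSION_COMMENT FROM VERSION;"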


kschon_splunk
Splunk Employee

It looks like your Hive version is 1.2.0. If vix.env.HUNK_THIRDPARTY_JARS is not specified in your provider, then you are using the default value, which probably assumes Hive version 0.12. Which Hadoop distribution are you using, and which version? And which version of Hunk are you using?
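For reference, all three can be checked from the command line, assuming the standard binaries are on the PATH and $SPLUNK_HOME points at the Hunk install:

$ hadoop version
$ hive --version
$ $SPLUNK_HOME/bin/splunk version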


burwell
SplunkTrust

Keith, it seems this is a bug in Splunk 6.4: etc/system/default/indexes.conf still lists the Hive 0.12 jars.

vix.env.HUNK_THIRDPARTY_JARS = $SPLUNK_HOME/bin/jars/thirdparty/common/avro-1.7.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/avro-mapred-1.7.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/commons-compress-1.10.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/commons-io-2.4.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/libfb303-0.9.2.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/parquet-hive-bundle-1.5.0.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/snappy-java-1.1.1.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive/hive-exec-0.12.0.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive/hive-metastore-0.12.0.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive/hive-serde-0.12.0.jar
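A quick way to confirm which value a given provider actually resolves to is btool; "provider:MyProvider" below is a placeholder for the real stanza name:

$ $SPLUNK_HOME/bin/splunk btool indexes list provider:MyProvider --debug | grep HUNK_THIRDPARTY_JARS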


maximus_reborn
Path Finder

Hi kschon, I am using Hadoop 2.6 and Hive 1.2.1. I am not using a Hadoop distribution; I installed the Hadoop binaries from the Apache mirror servers, and did the same for Hive. Does that answer your question?


kschon_splunk
Splunk Employee

Hi Becky, that's correct. The default value still assumes you are using a Hadoop 1.x distribution, most of which shipped with Hive 0.x. If you create a new provider and change the drop-down to YARN, it will change the value of that property to this:

vix.env.HUNK_THIRDPARTY_JARS = $SPLUNK_HOME/bin/jars/thirdparty/common/avro-1.7.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/avro-mapred-1.7.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/commons-compress-1.10.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/commons-io-2.4.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/libfb303-0.9.2.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/parquet-hive-bundle-1.5.0.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/snappy-java-1.1.1.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive_1_2/hive-exec-1.2.1.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive_1_2/hive-metastore-1.2.1.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive_1_2/hive-serde-1.2.1.jar

This value refers to Hive 1.2.1. The reason I asked about the Hadoop version is that, unfortunately, Hive 1.x changed some method signatures, so the Splunk jar you use must be compiled against both the Hadoop version (1.x or 2.x) and the Hive version (0.x or 1.x).
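The VerifyError quoted at the top of the thread names a protobuf class, which fits this picture: as I understand it, hive-exec is a "fat" jar, and the 0.12 build bundles its own com.google.protobuf classes, which can clash with the protobuf classes Hadoop 2.x was compiled against. A quick way to check whether a given jar bundles protobuf:

$ unzip -l $SPLUNK_HOME/bin/jars/thirdparty/hive/hive-exec-0.12.0.jar | grep 'com/google/protobuf/GeneratedMessage'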

Assuming you are using Hadoop 2.x, you should be able to just add the above line to your provider stanza, and hopefully that will fix your problem. Or you could change it to point at your own Hive jars (it looks like you are using 1.2.0, while Hunk 6.4 ships with 1.2.1), but I don't think that will be necessary.
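Concretely, the provider stanza in indexes.conf would end up looking something like this sketch; the stanza name, paths, and namenode address are placeholders for your own values:

[provider:my-yarn-provider]
vix.family = hadoop
vix.env.JAVA_HOME = /opt/java
vix.env.HADOOP_HOME = /opt/hadoop
vix.fs.default.name = hdfs://namenode.example.com:8020
vix.mapreduce.framework.name = yarn
# full comma-separated list exactly as quoted above:
vix.env.HUNK_THIRDPARTY_JARS = $SPLUNK_HOME/bin/jars/thirdparty/common/avro-1.7.7.jar,...,$SPLUNK_HOME/bin/jars/thirdparty/hive_1_2/hive-serde-1.2.1.jar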

maximus_reborn
Path Finder

Thanks kschon. That solved the problem.


kschon_splunk
Splunk Employee

Great! If there are no other issues, can you "accept" the answer?
