All Apps and Add-ons

Hunk giving Exception in thread "main" java.lang.NoSuchFieldError: FACTORY

toabhishek16
New Member

Hi all,

Thanks for the support on my previous questions.

I have Hive tables (Hive 0.14) stored in ORC format. While trying to access them through Hunk, it gives the error below:

HiveMetaStoreClient - Connected to metastore.
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - Exception in thread "main" java.lang.NoSuchFieldError: FACTORY
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.datasource.hive.HivePPDUtil.getHivePPDInfo(HivePPDUtil.java:80)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.HiveSplitGenerator.getTableSchema(HiveSplitGenerator.java:203)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.HiveSplitGenerator.sendSplitToAcceptor(HiveSplitGenerator.java:67)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.FileSplitGenerator.generateSplits(FileSplitGenerator.java:79)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.VirtualIndex$FileSplitter.accept(VirtualIndex.java:1418)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.VirtualIndex$FileSplitter.accept(VirtualIndex.java:1396)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.VirtualIndex$VIXPathSpecifier.addStatus(VirtualIndex.java:576)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.VirtualIndex$VIXPathSpecifier.listStatus(VirtualIndex.java:609)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.VirtualIndex$Splitter.generateSplits(VirtualIndex.java:1566)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.VirtualIndex.generateSplits(VirtualIndex.java:1485)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.VirtualIndex.generateSplits(VirtualIndex.java:1437)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.input.VixSplitGenerator.generateSplits(VixSplitGenerator.java:55)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.SplunkMR$SearchHandler.streamData(SplunkMR.java:674)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.SplunkMR$SearchHandler.executeImpl(SplunkMR.java:936)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.SplunkMR$SearchHandler.execute(SplunkMR.java:771)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.SplunkMR.runImpl(SplunkMR.java:1518)
05-07-2015 13:54:32.146 INFO ERP.tmglogsproviderorc - at com.splunk.mr.SplunkMR.run(SplunkMR.java:1300)
05-07-2015 13:54:32.147 INFO ERP.tmglogsproviderorc - at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
05-07-2015 13:54:32.147 INFO ERP.tmglogsproviderorc - at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
05-07-2015 13:54:32.147 INFO ERP.tmglogsproviderorc - at com.splunk.mr.SplunkMR.main(SplunkMR.java:1546)
05-07-2015 13:54:32.173 INFO ERPSearchResultCollector - ERP peer=tmglogsproviderorc is done reading search results.

Hunk is able to access data from Hive tables stored in text format, but the events contain garbage characters.

Please help; am I missing any configuration?

Thanks
Abhishek


hyan_splunk
Splunk Employee

I suspect a hive-0.14 jar is still being picked up by Hunk. You can turn on debug mode by setting "vix.splunk.search.debug = 1" in the provider config, run a search, and check search.log. Look for these two lines:

05-10-2015 21:49:53.499 DEBUG ERP. - HADOOP_CLASSPATH=...
05-10-2015 21:49:55.109 INFO ERP. - SplunkMR - Setting custom jars=...

and make sure your hive-0.12 or 0.13 jars are on those lists, and that they appear before any hive-0.14 jars that may also be listed.
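For reference, the debug flag goes in the provider stanza in indexes.conf on the search head. A minimal sketch (the stanza name below is illustrative; use your own provider's name):

```ini
# indexes.conf -- provider stanza (stanza name is illustrative)
[provider:tmglogsproviderorc]
# Turn on ERP debug logging so HADOOP_CLASSPATH and the custom jars
# list are printed to search.log for the next search you run.
vix.splunk.search.debug = 1
```

Remember to set it back to 0 once you have captured the classpath output, since debug logging is verbose.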

If they are, then you should open a support case and upload a diag.


toabhishek16
New Member

Hi hyan_splunk,

I tried your second suggestion, as I have all the tables in ORC format:

2) Replace the hive-0.14 jars specified in Hunk's vix.env.HUNK_THIRDPARTY_JARS with lower-version Hive jars, to avoid turning off ORC predicate pushdown. Note: if your table columns use the CHAR type introduced in 0.13, you can only downgrade to 0.13.

but it still gives the same error.
I tried with both Hive 0.12 and Hive 0.13 jars.


toabhishek16
New Member

Thanks hyan_splunk,

I will try the suggested workarounds.

It would be a great help if you could give any hint about how long the bug fix will take.

Thanks
Abhishek


hyan_splunk
Splunk Employee

I assume you replaced the hive-0.12 jars Hunk shipped with your hive-0.14 ones. This problem is due to Hive 0.14 introducing a non-backward-compatible API. I will file a bug and fix it later. In the meantime, you have two workarounds:

1) Set the property "vix.splunk.search.splitter.hive.ppd = 0" in your provider stanza. However, this turns off the ORC Predicate Pushdown feature.
2) Replace the hive-0.14 jars specified in Hunk's vix.env.HUNK_THIRDPARTY_JARS with lower-version Hive jars, to avoid turning off ORC predicate pushdown. Note: if your table columns use the CHAR type introduced in 0.13, you can only downgrade to 0.13.
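Either workaround goes in the provider stanza in indexes.conf. A sketch of what each might look like (stanza name, jar file names, and paths below are illustrative; use the actual jar locations in your deployment):

```ini
# indexes.conf -- provider stanza (names and paths are illustrative)
[provider:tmglogsproviderorc]

# Workaround 1: disable Hive predicate pushdown entirely.
# Avoids the incompatible 0.14 API call, but loses ORC PPD.
vix.splunk.search.splitter.hive.ppd = 0

# Workaround 2 (alternative): list hive-0.13 jars instead of hive-0.14
# so Hunk loads a compatible Hive client API.
vix.env.HUNK_THIRDPARTY_JARS = /opt/hive-0.13/lib/hive-exec-0.13.1.jar,/opt/hive-0.13/lib/hive-metastore-0.13.1.jar
```

Use one workaround or the other, not both: with workaround 2 in place, predicate pushdown can stay enabled.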
