
Hunk with Yarn - Class Not Found Exception

Splunk Employee

Running Hunk with Yarn and seeing these issues:

ERROR .. - Error while waiting for MapReduce job to complete, jobid=[! http://:8088/proxy/application/ job], state=FAILED, reason=Application application failed 1 times due to AM Container for appattempt_ exited with exitCode: 1 due to:

To investigate, go to the YARN ResourceManager UI at http://:8088, drill down to your specific job, and select "Logs" (normally on the lower right side of the page).

The log may show something useful, such as:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/v2/app/MRAppMaster
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapreduce.v2.app.MRAppMaster ..

Or the log may not show anything useful at all, for example: "The requested application exited before setting a tracking URL."


Re: Hunk with Yarn - Class Not Found Exception

Splunk Employee

To fix this issue:
1) Go to http://:8088/conf
2) Find the key "yarn.application.classpath"
*** The exact same value should also be present in yarn-site.xml on your Hadoop nodes (servers).
3) Copy the value of this YARN classpath.
4) In the Hunk provider settings, add the setting "vix.yarn.application.classpath" and paste in the value you copied from "yarn.application.classpath".
For example:
vix.yarn.application.classpath = $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$YARN_HOME/*,$YARN_HOME/lib/*
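As a minimal sketch, the resulting provider stanza in indexes.conf on the Hunk search head could look like this (the provider name and classpath value are illustrative; use the value copied from your own cluster's "yarn.application.classpath"):

```
[provider:my-hadoop-provider]
# "my-hadoop-provider" is a placeholder name, not from the original post.
vix.family = hadoop
vix.yarn.application.classpath = $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$YARN_HOME/*,$YARN_HOME/lib/*
```

After restarting or re-running the search, the MRAppMaster class should now be resolvable on the YARN application master's classpath.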



Re: Hunk with Yarn - Class Not Found Exception

Splunk Employee

Hi,

I have seen another error where this solution might be very helpful:

If you install Splunk (with a Splunk Analytics for Hadoop license, or simply to use Hadoop Data Roll) on a separate search head, and you are only using the "Hadoop Client Libraries" for your distribution (Apache Hadoop, Hortonworks HDP, Cloudera CDH, ...), you might need to copy mapred-site.xml and yarn-site.xml from a NodeManager or DataNode of your cluster to
/path/to/your/Hadoop Packages/etc/hadoop/conf/

Otherwise you need to set a number of additional parameters in your Splunk Hadoop provider (vix.yarn..., vix.mapred..., vix.mapreduce...).
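As a rough sketch of that alternative (host names and ports below are placeholders, not from the original post; Hunk strips the "vix." prefix and passes the remaining Hadoop property through to the job), the provider stanza would need entries along these lines:

```
[provider:my-hadoop-provider]
# All hosts/ports here are assumptions; substitute your cluster's values.
vix.mapreduce.framework.name = yarn
vix.yarn.resourcemanager.address = resourcemanager.example.com:8032
vix.yarn.resourcemanager.scheduler.address = resourcemanager.example.com:8030
vix.mapreduce.jobhistory.address = historyserver.example.com:10020
```

Copying the two XML files into the client libraries' conf directory is usually simpler, since the cluster's own values are then picked up automatically.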

If
index=hadoop-index | head 10
returns data, HDFS access is working.

If (in Smart Mode!)
index=hadoop-index | stats count
returns a result without errors, the MapReduce job ran and you've done the right thing.

And don't forget this helpful blog post:

https://www.splunk.com/blog/2014/05/14/hunkonhunk.html

HTH,

Holger
