When setting up a provider for Hunk (Virtual Indexes -> Provider), I am a bit confused about the configuration options.
Hadoop version: Hadoop 2.x (YARN)
Job Tracker: (? In Hadoop 2 there is no Job Tracker, so I used the Resource Manager address.)
Then I added the following two settings; their values match yarn-site.xml:
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master1:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>master1:8033</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master1:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master1:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master1:8088</value>
</property>
Which generates the following error:
[Stor] JobStartException - Failed to start MapReduce job. Please consult search.log for more information. Message: [ Failed to start MapReduce job, name=SPLK_master1_1389788887.59_0 ] and [ Unknown rpc kind RPC_WRITABLE ]
If I enable MapReduce v1 and set the address of the JobTracker in the "Job Tracker" field:
Then it works, but it uses MRv1 and not YARN.
What am I missing?
It would be unusual to see a Hadoop cluster running both YARN and MRv1 simultaneously. What version of Hadoop are you running? Are you running a commercial distribution? If so, which one, and which version?
If you're running YARN, leave the Job Tracker field blank (this is suboptimal; we're fixing it in the next version). The docs here should help: http://docs.splunk.com/Documentation/Hunk/6.0/Hunk/SetupanHDFSprovider. Other than that, your settings look like they should work. Can you send us a pastebin of your search.log (Job -> Inspect Job -> scroll to the bottom and click search.log)?
Thanks for the answer.
I removed the content of the Job Tracker field. Then the error became:
[Stor] JobStartException - Failed to start MapReduce job. Please consult search.log for more information. Message: [ Failed to start MapReduce job, name=SPLK_master1_1389860452.143_0 ] and [ Does not contain a valid host:port authority: ]
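For context, that last message is Hadoop rejecting an address setting that came through empty (note the blank after "authority:"). Loosely, and only as an illustration in Python rather than Hadoop's actual Java code, the check amounts to:

```python
def is_valid_authority(addr):
    """Loose sketch of a host:port authority check: non-empty host, numeric port."""
    host, sep, port = addr.rpartition(":")
    return bool(host) and sep == ":" and port.isdigit()

# "master1:8032" passes; "", "master1", and ":8032" all fail the check,
# which is why a blank address field produces this error.
```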
As to YARN and MRv1 coexisting: I use Cloudera Standard 4.7.3 to manage a 32-node Hadoop cluster for a lab, and CDH allows deploying MRv1 in parallel with YARN (MRv2).
OK, so my bad. I checked the setup, and adding the MRv1 service had overridden the YARN setup. I fixed that.
Then removing the Job Tracker field value did trigger YARN MR (v2), which is good.
Now I got:
[Stor] JobStartException - Failed to start MapReduce job. Please consult search.log for more information. Message: Error while waiting for MapReduce job to complete, jobid=[http://master1:8088/proxy/application_1389874731460_0002/ job_1389874731460_0002], state=FAILED, reason=
or ERROR ChunkedOutputStreamReader - Invalid header line="3681388,1360210..."
Additional info: Splunk creates temp files under splunkMR/dispatch:
Cannot create username mapping file: /tmp/splunk/master1/splunk/etc/users/users.ini: Permission denied
Cannot open file=/tmp/splunk/master1/splunk/etc/users/users.ini for parsing: Permission denied
Error opening username mapping file: /tmp/splunk/master1/splunk/etc/users/users.ini
Cannot initialize: /tmp/splunk/master1/splunk/etc/system/metadata/local.meta: Permission denied
/tmp/splunk/master1/splunk/var/run/splunk/dispatch/SplunkMR_attempt_1389874731460_0008_m_000000_0 was not created before dispatch process was created
OK, I got it... The first run with MRv1 created a /tmp/splunk directory on each datanode with access rights set to mapred:mapred. As a result, user "yarn" could not write there, so I chmod 777'd this folder on all nodes.
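For reference, the fix above amounts to something like the loop below, shown as a dry run with echo (the node list and passwordless SSH are assumptions; drop the leading "echo" to actually execute it):

```shell
# Dry-run sketch of the cluster-wide permissions fix.
# NODES is a hypothetical list; replace with your datanode hostnames.
NODES="master1 node01 node02"
for host in $NODES; do
  echo ssh "$host" chmod -R 777 /tmp/splunk
done
```

A sticky-bit mode like 1777, which is what /tmp itself normally uses, would be slightly safer than a plain 777, since it stops one user from deleting another user's files there.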
Seems to work now.
If I delete /tmp/splunk, will it be re-created automatically now, i.e. even though I have already run a few searches?