
Why are we getting error "Server has invalid Kerberos principal" when running a search that triggers MapReduce?

eangeles
Path Finder

With Hunk, we're getting an invalid Kerberos principal error when we try to run a search that triggers MapReduce. Results stream back properly when we search using index="some_index", but the search errors out as soon as any post-processing pipe ("|") is applied.
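For example (a sketch from the search head CLI, assuming a virtual index named some_index; the second search forces a MapReduce job):

# streams results back fine
$SPLUNK_HOME/bin/splunk search 'index="some_index"'

# adding any reporting pipe triggers MapReduce and fails
$SPLUNK_HOME/bin/splunk search 'index="some_index" | stats count'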

SplunkMR - Failed to start MapReduce job.  Please consult search.log for more information. Message: [ Failed to start MapReduce job, name=SPLK_<hunk_server>_1435283732.28_0 ] and [ Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: rm/<master_server>@<REALM>; Host Details : local host is: "<hunk_server>/192.X.X.X"; destination host is: "<master_server>":8050;  ]

The stack trace says it is caused by an invalid Kerberos principal, but we've verified that the Kerberos ticket is valid and that the hdfs user is able to browse the Hadoop filesystem.

Caused by: java.lang.IllegalArgumentException: Server has invalid Kerberos principal
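The verification looked roughly like this (a sketch, using the keytab and principal from the provider config below):

# list the principals in the keytab, then obtain a ticket
klist -kt /etc/security/keytabs/hdfs.headless.keytab
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@<REALM>

# confirm the ticket is valid and HDFS is browsable as the hdfs user
klist
hadoop fs -ls /user/hdfs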

Here's the config for the provider:

[provider:hadoop]
vix.command.arg.3 = $SPLUNK_HOME/bin/jars/SplunkMR-s6.0-hy2.0.jar
vix.env.HADOOP_HOME = /usr/lib/hadoop
vix.env.JAVA_HOME = /usr/lib/jvm/jre-1.7.0
vix.family = hadoop
vix.fs.default.name = hdfs://<master_server>:8020
vix.mapreduce.framework.name = yarn
vix.splunk.home.hdfs = /user/hdfs/splunk/
vix.yarn.resourcemanager.address = <master_server>:8050
vix.yarn.resourcemanager.scheduler.address = <master_server>:8030
vix.splunk.impersonation = 1
vix.mapred.job.queue.name = default
vix.dfs.namenode.kerberos.principal = nn/_HOST@<REALM>
vix.hadoop.security.authentication = kerberos
vix.hadoop.security.authorization = 1
vix.kerberos.keytab = /etc/security/keytabs/hdfs.headless.keytab
vix.kerberos.principal = hdfs@<REALM>
vix.yarn.nodemanager.principal = yarn/_HOST@<REALM>
vix.yarn.resourcemanager.principal = yarn/_HOST@<REALM>
1 Solution

eangeles
Path Finder

This was due to a setting in the /etc/splunk-launch.conf.default file that redirected the SPLUNK_DB variable to our array mount point (SPLUNK_DB=/splunk-data). Since this folder was outside of SPLUNK_HOME, Splunk could not guarantee it had permissions on all Hadoop nodes to create it, and on the datanodes it would try to create the folder at / instead of /tmp/splunk.

Solution: remove the setting and symlink /splunk-data to $SPLUNK_HOME/var/lib/splunk. I believe the issue was also fixed by upgrading to Splunk 6.3.1.
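A sketch of that fix, assuming Splunk is stopped first and /splunk-data already holds the index data:

# remove (or comment out) the SPLUNK_DB=/splunk-data line in splunk-launch.conf,
# then point the default location at the array mount point via a symlink
$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/var/lib/splunk $SPLUNK_HOME/var/lib/splunk.bak
ln -s /splunk-data $SPLUNK_HOME/var/lib/splunk
$SPLUNK_HOME/bin/splunk start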


Ledion_Bitincka
Splunk Employee

The headless hdfs Kerberos principal is usually a service principal and does not have access to the compute resources of Hadoop. Have you tried using a user principal instead of:

vix.kerberos.keytab = /etc/security/keytabs/hdfs.headless.keytab
vix.kerberos.principal = hdfs@<REALM>
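For example, something like this (a sketch; the hunk principal and keytab path are hypothetical):

# in the [provider:hadoop] stanza, swap the headless service principal
# for a dedicated user principal, e.g.:
#   vix.kerberos.keytab = /etc/security/keytabs/hunk.user.keytab
#   vix.kerberos.principal = hunk@<REALM>
# then confirm that principal can authenticate from the search head:
kinit -kt /etc/security/keytabs/hunk.user.keytab hunk@<REALM>
klist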

eangeles
Path Finder

Yes, we have tried the same scenario with a "hunk" user. We confirmed that the hunk user is able to read from and write to HDFS as well as run MR jobs (via Pig). The Hunk configuration was modified accordingly, but we are still getting the same errors as above.


splunkIT
Splunk Employee

@eangeles, were you able to root cause this issue? I am hitting similar errors and would like some suggestions. Thanks.


hyan_splunk
Splunk Employee

The default port number for the resource manager is 8032, so please make sure your cluster setting has actually been changed to 8050:
yarn.resourcemanager.address = <master_server>:8050

Also, please make sure the hdfs.headless.keytab file is saved in the /etc/security/keytabs/ directory of your Splunk search head and that the principal name is correct:
vix.kerberos.principal = hdfs@<REALM>
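Both can be checked quickly from the search head (a sketch; the yarn-site.xml path varies by distribution):

# confirm the port the cluster advertises for the resource manager
grep -A1 'yarn.resourcemanager.address' /etc/hadoop/conf/yarn-site.xml

# confirm the keytab is present and contains the expected principal
ls -l /etc/security/keytabs/hdfs.headless.keytab
klist -kt /etc/security/keytabs/hdfs.headless.keytab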


hyan_splunk
Splunk Employee

Are you able to run an example MR job with the principal from the Hadoop CLI on the Splunk search head?
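For example (a sketch; the examples jar path varies by Hadoop distribution):

# authenticate as the provider's principal, then run a sample job
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@<REALM>
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 10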


eangeles
Path Finder

Yep, that was one of the steps we took while debugging: making sure the user was able to trigger an MR job and had read/write access to the directory.


eangeles
Path Finder

Double-checked and confirmed that the cluster settings have the port set to 8050 for the resource manager. Also confirmed that the hdfs keytab file is saved in the correct directory and that the name of the principal is correct.

Thanks for taking time to reply!


apatil_splunk
Splunk Employee

I see that vix.splunk.impersonation = 1 is set in the provider.
Can you check whether the Splunk user you are trying to impersonate has write access on HDFS at:
vix.splunk.home.hdfs = /user/hdfs/splunk/
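One way to check (a sketch; hunkuser stands in for whichever user is being impersonated):

# on a kerberized cluster, authenticate as the impersonated user first
kinit hunkuser@<REALM>

# inspect permissions, then try a test write into the Hunk working directory
hadoop fs -ls /user/hdfs/splunk/
hadoop fs -touchz /user/hdfs/splunk/perm_test
hadoop fs -rm /user/hdfs/splunk/perm_test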


elin
Splunk Employee

Can you try adding the following to your config:

vix.java.security.krb5.kdc = [kerberos KDC server name]
vix.java.security.krb5.realm = [kerberos realm]
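If you're unsure of the right values, the realm and KDC the search head already uses can usually be read from its krb5.conf (a sketch):

# the default realm and KDC host are typically defined here
grep -E 'default_realm|kdc' /etc/krb5.conf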


eangeles
Path Finder

Hi elin - we've tried adding those properties to the config, to no avail; it resulted in the same error as described above. Thanks for taking the time to respond!
