Hi All,
I have integrated Splunk with Hadoop using the Hadoop Connect app on my search head instance [non-clustered search head], and data export to Hadoop is working fine. One of the setup steps is to copy the Hadoop-related details into krb5.conf, which I did in the default file /etc/krb5.conf.
It now looks like my changes overwrote the previously existing contents, so some of the PowerBroker roles on the machine are no longer working, and the Splunk admin can no longer sudo, which means some of his monitoring scripts are failing as well.
We learned that PowerBroker login also uses the krb5.conf that comes built in with machines at my organization. On checking with our Hadoop admin, we learned that appending the Hadoop-related configs to the default krb5.conf isn't an option [as cross-realms are not currently supported]; we need two separate config files and must specify the paths of both by setting the "KRB5_CONFIG" environment variable.
Example value for specifying multiple files: KRB5_CONFIG=/etc/default_krb5.conf:/etc/hadoop_krb5.conf
[reference link: https://web.mit.edu/kerberos/krb5-devel/doc/admin/env_variables.html]
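For illustration, setting KRB5_CONFIG so that both files are consulted could look like the sketch below. The two file paths are the example values from above; everything else (where you export the variable, and whether your processes honor it) needs to be verified for your own environment:

```shell
# Point the MIT Kerberos libraries at both config files.
# The libraries read the listed files in order; values from
# earlier files take precedence over later ones.
export KRB5_CONFIG=/etc/default_krb5.conf:/etc/hadoop_krb5.conf

# Verify what the current shell will pass to child processes:
echo "$KRB5_CONFIG"
```

If splunkd itself needs to see this variable, one option to check against your deployment is adding the same `KRB5_CONFIG=...` line to $SPLUNK_HOME/etc/splunk-launch.conf, which is where Splunk picks up environment variables at startup.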
Questions:
To point the Hadoop Connect app at the custom configuration file, can you please share the list of files and the property names that need to be changed?
Would be great if you could share some suggestions.
Thanks!
Splunk Version : 6.5.2
Cloudera Enterprise 5.10.1 (hadoop-2.6.0)
Kerberos Secured
Hadoop Connect App : 1.2.5
With Hadoop Connect, all Kerberos settings must be in the files clusters.conf and core-site.xml.
When you create a new connection from the UI, Splunk generates these two files.
http://docs.splunk.com/Documentation/HadoopConnect/1.2.5/DeployHadoopConnect/Configurationfilerefere...
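As a rough sketch of what the Kerberos-related attributes in $SPLUNK_HOME/etc/apps/HadoopConnect/local/clusters.conf tend to look like (attribute names are from the Hadoop Connect configuration file reference linked above; the stanza name, host, realm, and paths here are placeholders to verify against your generated file):

```
[namenode.example.com:8020]
# Principal Splunk uses to authenticate to the secured cluster (placeholder)
kerberos_principal = splunk@EXAMPLE.REALM
# Service principal of the HDFS NameNode (placeholder)
kerberos_service_principal = hdfs/namenode.example.com@EXAMPLE.REALM
hadoop_home = /opt/hadoop
java_home = /usr/java/default
```

Since the UI regenerates these files when you create or edit a connection, any hand edits should be made in the app's local directory so they survive, and rechecked after changing the connection in the UI.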