Deployment Architecture

Distributed Search Problems.

OldManEd
Builder

I have two Splunk instances on Linux systems that I’ve inherited, and the Search Head is throwing the following message:

[subsearch]: [<indexer server name>] Failed to create a bundles setup with server name '<search head name>'. Using peer's local bundles to execute the search, results might not be correct

We are using a distributed search setup, so I went to the indexer and restarted it. In the splunkd.log file I saw the following errors:

02-26-2014 14:50:03.723 +0000 ERROR ConfObjectManagerDB - Cannot initialize: /mnt//apps/SplunkDeploymentMonitor/metadata/local.meta: Permission denied

02-26-2014 14:50:03.728 +0000 ERROR ConfObjectManagerDB - Cannot initialize: /mnt//apps/Splunk_for_Exchange/metadata/local.meta: Permission denied

02-26-2014 14:50:03.732 +0000 ERROR ConfObjectManagerDB - Cannot initialize: /mnt//apps/search/metadata/local.meta: Permission denied

02-26-2014 14:50:03.737 +0000 ERROR ConfObjectManagerDB - Cannot initialize: /mnt//apps/user-prefs/metadata/local.meta: Permission denied

02-26-2014 14:50:03.860 +0000 ERROR ConfObjectManagerDB - Cannot initialize: /mnt//system/metadata/local.meta: Permission denied

02-26-2014 14:50:03.869 +0000 ERROR ConfObjectManagerDB - Cannot initialize: /mnt//apps/learned/metadata/local.meta: Permission denied

02-26-2014 14:50:03.874 +0000 WARN ConfPathMapper - Failed to open: /mnt//apps/SplunkDeploymentMonitor/local/app.conf: Permission denied

I went to the directory in question on the indexer and saw that a bunch of files were all owned by 1010:1010. A quick look at /etc/passwd on the indexer showed that no account is associated with UID 1010. I then went back to the search head and saw that, indeed, the splunk user there has a UID of 1010.
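For reference, that unmapped-UID check can be scripted with `getent` and `find`; a minimal sketch, assuming a default /opt/splunk tree (adjust the path for the /mnt/... layout shown in the logs above):

```shell
# UID seen on the files, per the post above
SUSPECT_UID=1010

# Does any account on this host map to that UID?
getent passwd "$SUSPECT_UID" \
  || echo "UID $SUSPECT_UID has no passwd entry on this host"

# List everything under the Splunk tree owned by that numeric UID
find /opt/splunk -user "$SUSPECT_UID" -printf '%u:%g %p\n' 2>/dev/null | head
```

With GNU find, a numeric argument to `-user` that matches no login name is treated as a raw UID, which is exactly the situation here.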

So, I see what the problem is, but how do I address it? Who is supposed to own this directory? If I manually change the ownership of the directory and its subdirectories, will Splunk change them back to 1010:1010?

mark_anderson
Engager

I had the same issue after server patching broke the DB connector (a JRE path issue). Splunk was restarted by the root user and everything worked fine. Some time later, Splunk was restarted and failed; a range of files had been changed to owner:group root:root. I found this article - http://answers.splunk.com/answers/171236/fail-to-restart-splunk-after-installing-db-collect.html - which describes the issue and the fix (a chown of /opt/splunk back to splunk:splunk).


kristian_kolb
Ultra Champion

Splunk normally works fine when the splunk user (often 'splunk') owns everything under /opt/splunk.

Strange things (mainly permission problems) can happen when a Splunk instance that normally runs under a restricted account, such as 'splunk', has been started as 'root' and then, after a while, is restarted under the correct user.

In the meantime quite a few files have been altered or created as root, and the 'splunk' account has no (or limited) access to them.
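One quick way to confirm which account splunkd is currently running under; a sketch, assuming the stock pid-file location of a default /opt/splunk install (not something taken from this thread):

```shell
# Show the account that owns the running splunkd process.
# The pid file can list several PIDs; take the first one.
PID=$(head -n 1 /opt/splunk/var/run/splunk/splunkd.pid)
ps -o user= -p "$PID"
```

If this prints root while the files are supposed to belong to 'splunk', you are in exactly the mixed-ownership situation described above.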

You could try changing the ownership, but you should of course first find out which user Splunk is supposed to run as.
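A sketch of that sequence: stop Splunk, reset ownership of the whole tree, and start it again as the intended service account. The user name 'splunk' and the /opt/splunk path are assumptions carried over from the answers above; substitute whatever your init script or SPLUNK_OS_USER actually uses:

```shell
# Stop Splunk first so no new files are created mid-fix
/opt/splunk/bin/splunk stop

# Give the whole tree back to the intended service account
chown -R splunk:splunk /opt/splunk

# Start it again as that account so new files get the right owner
su - splunk -c '/opt/splunk/bin/splunk start'
```

Restarting via `su - splunk` (rather than as root) is the part that keeps the problem from recurring.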

/K
