There appears to be a problem with the TA-sos add-on when running in a clustered indexer environment.
I see this error on all of my indexers:
03-21-2013 22:01:23.739 +0000 ERROR ExecProcessor - message from "/opt/splunk/etc/slave-apps/TA-sos/bin/ps_sos.sh" /bin/sh: /opt/splunk/etc/slave-apps/TA-sos/bin/ps_sos.sh: Permission denied
After some hunting, I found that the permissions are correct on $Clustermaster/etc/master-apps/TA-sos/bin:
[splunk]:stmocprvsh1:/opt/splunk_clustermaster/etc/master-apps/TA-sos/default$ ls -l ../bin
total 12
-r-xr-xr-x 1 splunk splunk 2515 Jul 11 2012 common.sh
-r-xr-xr-x 1 splunk splunk 1445 Sep 19 2012 lsof_sos.sh
-r-xr-xr-x 1 splunk splunk 2075 Oct 4 05:38 ps_sos.sh
But when the bundle is created and pushed out to the clustered indexers, the permissions get changed:
[splunk@stmocprvidx3 hot]$ ls -l /opt/splunk/etc/slave-apps/TA-sos/bin/
total 12
-rw------- 1 splunk splunk 2515 Mar 21 21:23 common.sh
-rw------- 1 splunk splunk 1445 Mar 21 21:23 lsof_sos.sh
-rw------- 1 splunk splunk 2075 Mar 21 21:23 ps_sos.sh
I can't find any way to tell the cluster master not to change the permissions on files under the master-apps/ directory.
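The only stopgap I can think of so far is to re-apply the execute bit on each indexer after every bundle push (the path below assumes a default /opt/splunk install), something like:
chmod ug+x /opt/splunk/etc/slave-apps/TA-sos/bin/*.sh
but presumably that has to be redone after every push, since the slave-apps contents get rewritten.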
Anyone else see this?
-dave
UPDATE: This will be fixed in maintenance release 5.0.4.
This has been reproduced in-house and identified as core Splunk bug SPL-64308. I'll update this answer once I have more details regarding the release in which this will be fixed.
Not sure if this bug has resurfaced or something. I have just noticed the same message in my splunkd.log. My TA-sos app is installed on this search head (a Linux box) by a deployment server (which is a Windows box), both running Splunk Enterprise version 6.4.2.
After I did a "chmod ug+x" on the shell scripts in TA-sos, the messages stopped.
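For reference, the command was along these lines (the path assumes the app landed in etc/apps on the search head):
chmod ug+x /opt/splunk/etc/apps/TA-sos/bin/*.sh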
Permissions, rather than ownership, are probably the issue here. Is your script set to be executable by the "splunk" user?
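A quick way to check, using the script path from the error message above (adjust as needed):
sudo -u splunk test -x /opt/splunk/etc/slave-apps/TA-sos/bin/ps_sos.sh && echo executable || echo 'not executable'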
I'm running clustered indexers on 6.0 and I'm seeing this same problem. Not sure what to do.
05-02-2014 13:39:15.031 -0400 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/anpl/bin/timemod.pl" /bin/sh: /opt/splunkforwarder/etc/apps/anpl/bin/timemod.pl: Permission denied
I tried to chown the script to splunk:splunk, where it was root:root before. Still getting the error.
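I guess the next thing to check is the execute bit rather than the owner, since the error is "Permission denied" on execute; for example (path taken from the error above):
ls -l /opt/splunkforwarder/etc/apps/anpl/bin/timemod.pl
chmod ug+x /opt/splunkforwarder/etc/apps/anpl/bin/timemod.pl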
On a customer engagement, running 6.3.3 and experiencing the exact same issue. Verified permissions for both the ../etc/master-apps version and the ../etc/shcluster/apps version, but when they get pushed to the cluster, they lose the execute flag.
Are you certain that the scripted input files in the Cluster Master's master-apps directory have the right permissions to begin with?
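One quick way to verify is to look for any scripts under master-apps that are missing the execute bit before the bundle is pushed, for example (the path assumes the cluster master install shown above):
find /opt/splunk_clustermaster/etc/master-apps -name '*.sh' ! -perm -u+x -ls
If that comes back empty on the master but the same check turns up files under slave-apps on the indexers, the permissions are being dropped during the push.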
I am on release 6.0 and I am still seeing this error. Any answers?
If the fix for this issue doesn't make it into our next maintenance release (5.0.3), it is very likely that a patch will follow to resolve this particular problem.
Any word on when a fix for this will be coming around?
I am having the same problem. This distribution method doesn't keep the original permissions the way the deployment server does.
Thanks for reporting this issue. We'll take a look and attempt to reproduce it in-house.