Permissions issue in Docker container

When Splunk starts, it seems to chown the config files (e.g. web.conf) to whatever user Splunk is currently running as. This causes an issue with Kubernetes deployments.
When you mount configuration files through a ConfigMap, the volumes are mounted read-only and owned by root. Non-root processes can still read them, but when Splunk starts, the chown fails, which causes the container to fail as well.
Is there a flag to disable chowning the config files on start, or is this something that could be put in a change request and removed from startup altogether?
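For reference, a minimal sketch of the kind of mount that triggers this (the ConfigMap name, the mount path, and the image tag are placeholders, not from the original post): the ConfigMap item arrives read-only and root-owned, so the chown at container start fails.

```yaml
# Hypothetical example: mounting web.conf from a ConfigMap directly into
# the live etc directory. The read-only, root-owned mount is what the
# chown at container start trips over.
apiVersion: v1
kind: Pod
metadata:
  name: splunk
spec:
  containers:
    - name: splunk
      image: splunk/splunk:latest        # placeholder image tag
      volumeMounts:
        - name: web-config
          mountPath: /opt/splunk/etc/system/local/web.conf   # assumed path
          subPath: web.conf
          readOnly: true
  volumes:
    - name: web-config
      configMap:
        name: splunk-web-config          # assumed ConfigMap containing web.conf
```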
I encountered this as well when running splunkforwarder on a Kubernetes cluster as a DaemonSet. It was solved by mounting the volume to /opt/splunkforwarder-etc/ instead of /opt/splunkforwarder. It seems that all local/custom configuration should be placed under /opt/splunkforwarder-etc/ (see the sketch below).
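Here is a minimal DaemonSet sketch of that workaround (the ConfigMap name, the app directory under /opt/splunkforwarder-etc/, and the image tag are placeholders): the config is mounted under /opt/splunkforwarder-etc/ rather than the live etc directory, so the image can copy it into place with the correct ownership at startup instead of chowning a read-only mount.

```yaml
# Hypothetical example: mount custom config under /opt/splunkforwarder-etc/
# instead of /opt/splunkforwarder/etc so startup can copy it into place
# with the correct ownership.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: splunk-forwarder
spec:
  selector:
    matchLabels:
      app: splunk-forwarder
  template:
    metadata:
      labels:
        app: splunk-forwarder
    spec:
      containers:
        - name: splunkforwarder
          image: splunk/universalforwarder:latest   # placeholder image tag
          volumeMounts:
            - name: forwarder-config
              mountPath: /opt/splunkforwarder-etc/apps/k8s_inputs/local   # assumed app path
              readOnly: true
      volumes:
        - name: forwarder-config
          configMap:
            name: splunk-forwarder-config            # assumed ConfigMap with inputs.conf/outputs.conf
```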
See my response on this thread:
An upvote would be appreciated, and please Accept as Solution if it helps!
