
Permissions issue in Docker container

thoyt
Engager

When Splunk starts, it seems to try to chown the config files (e.g. web.conf) to whatever user Splunk is currently running as. This causes an issue with Kubernetes deployments.

When you mount configuration files through a ConfigMap, the volumes are mounted read-only and owned by root. A non-root process can still read them, but when Splunk tries to start, the chown fails, and the container fails with it.

Is there a flag to disable chown-ing config files on start, or is this something that can be put in a change request and removed from startup altogether?
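
For reference, a minimal sketch of the kind of setup that hits this. The pod name, ConfigMap name, and mount path are placeholders for illustration, not from an actual deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: splunk
spec:
  containers:
    - name: splunk
      image: splunk/splunk:latest
      # other required env (license acceptance, admin password) omitted for brevity
      volumeMounts:
        # ConfigMap files land here read-only and owned by root,
        # so the chown Splunk attempts at startup fails
        - name: splunk-config
          mountPath: /opt/splunk/etc/system/local
          readOnly: true
  volumes:
    - name: splunk-config
      configMap:
        name: splunk-web-conf   # hypothetical ConfigMap containing web.conf
```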


jhomerlopez
Explorer

I encountered this as well when running splunkforwarder on a Kubernetes cluster as a DaemonSet. It was solved by mounting the volume to /opt/splunkforwarder-etc/ instead of /opt/splunkforwarder. It seems that all local/custom configuration should be placed under /opt/splunkforwarder-etc/ (see the sketch below).
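
A rough sketch of that workaround, assuming the container copies the staged config from /opt/splunkforwarder-etc/ into place on startup as described above. The DaemonSet name, ConfigMap name, and the system/local target are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: splunk-uf
spec:
  selector:
    matchLabels:
      app: splunk-uf
  template:
    metadata:
      labels:
        app: splunk-uf
    spec:
      containers:
        - name: splunkforwarder
          image: splunk/universalforwarder:latest
          volumeMounts:
            # mount into the staging copy of etc, not /opt/splunkforwarder itself,
            # so startup never tries to chown the read-only ConfigMap mount
            - name: uf-config
              mountPath: /opt/splunkforwarder-etc/system/local
              readOnly: true
      volumes:
        - name: uf-config
          configMap:
            name: uf-config   # hypothetical ConfigMap with outputs.conf, inputs.conf, etc.
```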


codebuilder
Influencer

See my response on this thread:

----
An upvote would be appreciated and Accept Solution if it helps!