Splunk Enterprise

Permissions issue in Docker container

thoyt
Engager

When Splunk starts, it seems to try to chown its config files (e.g. web.conf) to whatever user Splunk is currently running as. This causes an issue with Kubernetes deployments.

When you mount configuration files through a ConfigMap, the volumes are mounted read-only and owned by root. This would still allow non-root processes to read them. However, when Splunk tries to start, the chown fails, causing the container to fail as well.
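For context, a minimal sketch of the kind of mount that triggers this (the image tag, ConfigMap name, and file names are illustrative, not taken from an actual deployment):

    apiVersion: v1
    kind: Pod
    metadata:
      name: splunk
    spec:
      containers:
        - name: splunk
          image: splunk/splunk:latest
          volumeMounts:
            # Files projected from a ConfigMap arrive read-only and owned by root,
            # so Splunk's attempt to chown web.conf at startup fails.
            - name: splunk-config
              mountPath: /opt/splunk/etc/system/local/web.conf
              subPath: web.conf
      volumes:
        - name: splunk-config
          configMap:
            name: splunk-web-conf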

Is there a flag to disable chown-ing config files on start, or is this something that can be put in a change request and removed from startup altogether?


jhomerlopez
Explorer

I encountered this as well when running the splunkforwarder on a Kubernetes cluster as a DaemonSet. It was solved by mounting the volume to /opt/splunkforwarder-etc/ instead of /opt/splunkforwarder. It seems that all local/custom configuration should go in /opt/splunkforwarder-etc/.
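For anyone hitting the same thing, a rough sketch of what that looks like in a DaemonSet spec (the image tag, ConfigMap name, and exact subdirectory under /opt/splunkforwarder-etc/ are my assumptions, not something the original poster specified):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: splunkforwarder
    spec:
      selector:
        matchLabels:
          app: splunkforwarder
      template:
        metadata:
          labels:
            app: splunkforwarder
        spec:
          containers:
            - name: splunkforwarder
              image: splunk/universalforwarder:latest
              volumeMounts:
                # Mount custom config under /opt/splunkforwarder-etc/ rather than
                # /opt/splunkforwarder, so startup does not try to chown the
                # read-only ConfigMap files in place.
                - name: forwarder-config
                  mountPath: /opt/splunkforwarder-etc/system/local
          volumes:
            - name: forwarder-config
              configMap:
                name: splunkforwarder-config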


codebuilder
Influencer

See my response on this thread:

----
An upvote would be appreciated, and please accept this as a solution if it helps!