I'm using the Splunk-developed splunk/splunk:7.3.0 Docker image as the base (FROM) image for my own custom Docker image.
I'm using Splunk Web to develop a Splunk app inside a Docker container based on that custom image.
For previous apps that I've developed in this context, I've used the docker cp command to copy my app folder out of the container to the Docker host file system. That works, but I've always felt it was inelegant, and that there must be a better way.
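For reference, that boils down to a one-liner along these lines (container name, app name, and host path are placeholders; /opt/splunk/etc/apps is where the container keeps apps):

docker cp splunk-dev:/opt/splunk/etc/apps/myapp /home/me/dev/splunk/apps/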
I'm looking for a best-practice method of extracting my app from the Docker container, although I'd settle for least effort.
Recently, I asked a colleague for advice on a more streamlined method. They suggested using a bind mount: mapping a folder in the Docker host file system to the app folder inside the container. I tried that, but immediately ran into a problem: when you edit a dashboard XML file in Splunk Web, Splunk Web sets the file permissions to the Splunk user UID. My user ID on the Docker host couldn't read the updated files.
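For context, the bind mount looked roughly like this (image name, host path, and password are placeholders; SPLUNK_START_ARGS and SPLUNK_PASSWORD are the variables the splunk/splunk image wants at startup, as I recall):

docker run -d -p 8000:8000 \
  -e SPLUNK_START_ARGS=--accept-license \
  -e SPLUNK_PASSWORD=something-secret \
  -v /home/me/dev/splunk/apps/myapp:/opt/splunk/etc/apps/myapp \
  my-custom-splunk-image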
I can see that the base splunk/splunk (docker-splunk) Dockerfile uses build-time arguments, via the Dockerfile ARG instruction, to set the UID and GID of the Splunk user.
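From memory, the relevant bit follows a pattern something like this (paraphrased, not the actual docker-splunk Dockerfile; 41812 is the UID that shows up on files the container writes, as described further down):

ARG UID=41812
ARG GID=41812
# ...and later, something along the lines of:
RUN groupadd -r -g ${GID} splunk && useradd -r -u ${UID} -g splunk splunk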
I don't fancy changing my own UID on the Docker host to match the UID set by that Splunk Dockerfile. I don't like the idea of my UID being "held hostage" by this particular use case.
It occurred to me that I could build my own version of the base Splunk Docker image to set those arguments to match my user ID. Something like this (untested):
docker build --build-arg UID=$UID --build-arg GID=`id -g` ...
(there's no $GID environment variable, hence the id -g)
but then I started looking into building the base Splunk Docker image from source, and got scared off by the docs:
Build from source
While we don’t support or recommend you building your own images from source, it is entirely possible. This can be useful if you want to incorporate very experimental features, test new features, and if you have your own registry for persistent images.
My use case doesn't match any of those. I'm just a dev who wants to get my app folder out. I don't consider that to be experimental.
Those docs also refer to a separate project:
Splunk provisioning capabilities are provided through the utilization of an entrypoint script and playbooks published separately via the splunk-ansible project.
all of which made me think that building the base image from source might be a "rabbit hole" I don't have time for. Too much effort.
I've considered creating a new user ID in the Docker host with a uid that matches the Splunk user uid in the Docker container.
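That would be something like this on the host (user and group names are mine; it also assumes the container's group ID is 41812, which I haven't verified):

sudo groupadd -g 41812 splunkdev
sudo useradd -u 41812 -g 41812 -m splunkdev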
It's also occurred to me to install Git inside the development Docker container, expose the necessary ports, Git-init my app folder, and push to a remote outside of the container, and outside of the Docker host.
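Sketched out, that would look something like this (container name, app name, and remote URL are placeholders; pushing outbound shouldn't actually need any extra ports exposed; the package manager depends on the base OS of the image):

docker exec -u root -it splunk-dev yum install -y git   # or apt-get, depending on the base image
docker exec -u splunk -it splunk-dev bash
cd /opt/splunk/etc/apps/myapp
git init
git remote add origin ssh://git@bitbucket.example.com/proj/myapp.git
git add --all
git commit -m "work in progress"
git push -u origin master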
For now, however, I'm going to fall back on docker cp.
Alternative answers and advice welcome. How do you get your in-development app out of a Splunk Docker container?
You should check out @outcoldman and his app boilerplate toolkit.
https://www.outcoldsolutions.com/blog/2018-06-28-splunk-application-boilerplate/
The official docker images definitely skew to running in production at this time, and are more geared toward operating and orchestrating Splunk vs a development kit, though we have talked about looking at a separate dev image as a future opportunity.
Thanks very much for the pointer and the insight ("official docker images ... skew to running in production"). Both very helpful. A Splunk-supplied dev image designed to smooth out the issues I'm meeting would be nice (in-container Git, maybe?).
I've Git-init'd the bind-mounted app folder on the Docker host. git add --all failed because of the permissions set by Splunk Web; sudo git add --all worked.
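Concretely, from inside the bind-mounted app folder it looks like this (the details of my app layout don't matter; the point is the ownership):

ls -lnR .            # the dashboard XML files Splunk Web touched are owned by the container's Splunk UID
git add --all        # fails with a permission error on those files
sudo git add --all   # works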
Then I defined a Git remote pointing to a repo in a Bitbucket server (not the Docker host), and pushed.
I'll see how this combination of bind mount, sudo to override the Splunk-set file permissions, and git push to a remote works for me. I was always going to be storing the app in a Git repo anyway, so really the only wrinkle is the sudo on the git add.
Fugly? Maybe. I'm open to a better way.
Dang. It gets more complicated. Splunk has set the ownership of files inside the .git folder to the Splunk user UID, 41812. I just had to sudo the git commit. And git push without sudo responded:
error: update_ref failed for ref 'refs/remotes/origin/master': cannot lock ref 'refs/remotes/origin/master': Unable to create '/home/grahan01/dev/splunk/apps/cicspa/.git/refs/remotes/origin/master.lock': Permission denied
but then a sudo git push responded "Everything up-to-date".
Not liking this.
I have sudo privileges on the Docker host, so I could use those privileges to overcome the permissions set by Splunk, but I'm not convinced that's any better than docker cp.
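If I do go down the sudo route, it's probably less painful to take ownership back in one go after editing in Splunk Web, rather than sudo-ing every Git command:

sudo chown -R $(id -u):$(id -g) /home/grahan01/dev/splunk/apps/cicspa
# Splunk Web will put its own UID back on any files it writes afterwards, so this has to be repeated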