We are going to run several OpenShift 4 clusters on CoreOS, which means we have no way to install the standard (non-containerized) version of the Splunk Universal Forwarder on the nodes.
Do you think it is possible to install a containerized version of the Splunk Universal Forwarder on each OpenShift node and have it connect to a non-containerized Splunk deployment server, so the forwarders can be controlled centrally?
In general, it should be possible, as long as you keep the configuration and data persistent (which somewhat defeats the whole purpose of containerization, but still). If you define volumes for /opt/splunk/etc and /opt/splunk/var, you should be ready to go - the containerized setup should work the same as a normal Splunk installation. The only caveat: I'm not sure how it will react to a version upgrade in the image repository if you specify that the latest tag should be run, so it's safer to pin a specific version.
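A minimal sketch of what that could look like with a plain `docker run` (the host paths, password, and version tag are placeholders; note that the official `splunk/universalforwarder` image keeps its files under /opt/splunkforwarder rather than /opt/splunk, so adjust the container-side paths to whatever image you actually use):

```shell
# Persist configuration and data on the host so the forwarder survives
# container recreation; pin a specific version tag instead of :latest so
# a registry update can't silently upgrade the forwarder under you.
docker run -d --name splunk-uf \
  -v /opt/splunk-uf/etc:/opt/splunkforwarder/etc \
  -v /opt/splunk-uf/var:/opt/splunkforwarder/var \
  -e SPLUNK_START_ARGS=--accept-license \
  -e SPLUNK_PASSWORD='changeme-placeholder' \
  splunk/universalforwarder:9.1.2
```

On OpenShift itself you would express the same idea as a DaemonSet with hostPath (or PV) volumes rather than a raw `docker run`, but the persistence principle is identical.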
Oh, and of course, since the container runs in its own namespace, you need to take care to pass the appropriate ports into the container if you want it to listen for network events, and to mount the relevant bits of the host's filesystem if you want to monitor files.
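To illustrate both points, and the central-management part of the question, here is a hedged sketch: publish a network input port, mount the host's logs read-only, and drop a deploymentclient.conf into the persistent etc volume so the forwarder phones home to your deployment server (the hostname, port numbers, and paths are all placeholders):

```shell
# Point the forwarder at the (non-containerized) deployment server by
# writing deploymentclient.conf into the persistent etc volume.
mkdir -p /opt/splunk-uf/etc/system/local
cat > /opt/splunk-uf/etc/system/local/deploymentclient.conf <<'EOF'
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy.example.com:8089
EOF

# Publish the port for a network (e.g. TCP/syslog) input and mount the
# host's log directory read-only so [monitor://...] stanzas can see it.
docker run -d --name splunk-uf \
  -p 1514:1514 \
  -v /var/log:/var/log:ro \
  -v /opt/splunk-uf/etc:/opt/splunkforwarder/etc \
  -v /opt/splunk-uf/var:/opt/splunkforwarder/var \
  -e SPLUNK_START_ARGS=--accept-license \
  -e SPLUNK_PASSWORD='changeme-placeholder' \
  splunk/universalforwarder:9.1.2
```

Because deploymentclient.conf lives on the persistent volume, the deployment server can push apps and config to the container and they survive restarts.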
Which, in general, makes it a bit pointless (as is the whole idea of containerized Splunk, in my opinion, but that's just me).
But if you have no other option for running anything "directly" on the host, it might work.