Ingesting application logs from Docker container into Splunk Cloud
Hi everyone,
I'm seeking advice on the best way to send application logs from our client's Docker containers into a Splunk Cloud instance, and I’d appreciate your input and experiences.
Currently, my leading approach involves using Docker's "Splunk logging driver" to forward data via the HTTP Event Collector (HEC). However, my understanding is that this method primarily sends container-level data rather than detailed application logs.
Another method I came across involves deploying Splunk's Docker image to create a standalone Enterprise container alongside the Universal Forwarder. The idea here is to set up monitors in the forwarder's inputs.conf to send data to the Enterprise instance and then route it via a Heavy Forwarder to Splunk Cloud.
Has anyone successfully implemented either of these approaches—or perhaps a different method—to ingest application logs from Docker containers into Splunk Cloud? Any insights, tips, or shared experiences would be greatly appreciated.
Thanks in advance for your help!
Cheers,

Hi @tawm_12 ,
The simplest method is often configuring your applications within the containers to log to stdout/stderr and then using the Docker Splunk logging driver to forward these logs directly to your Splunk Cloud HEC endpoint.
If your applications must log to files within the container filesystem, you can use a Universal Forwarder (UF) sidecar container.
Method 1: Docker Logging Driver (Recommended if apps log to stdout/stderr)
- Configure your application inside the Docker container to write its logs to standard output (stdout) and standard error (stderr). This is a common practice for containerized applications.
- Configure the Docker daemon or individual containers to use the splunk logging driver, pointing it to your Splunk Cloud HEC endpoint and token.
Example docker run command:
# Add other options like splunk-sourcetype, splunk-index, tag, etc. as needed
docker run \
  --log-driver=splunk \
  --log-opt splunk-token=<your-hec-token> \
  --log-opt splunk-url=https://<your-splunk-cloud-host>:8088 \
  --log-opt splunk-format=json \
  --log-opt splunk-verify-connection=false \
  your-application-image
This method leverages Docker's built-in logging capabilities. The driver captures the container's stdout/stderr streams (which contain your application logs if configured correctly) and forwards them via HEC.
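If you want every container on a host to use the Splunk driver by default rather than passing flags per `docker run`, the daemon-level equivalent is a sketch like the following in `/etc/docker/daemon.json` (the host and token values are placeholders, not from this thread):

```json
{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-token": "<your-hec-token>",
    "splunk-url": "https://<your-splunk-cloud-host>:8088",
    "splunk-format": "json"
  }
}
```

Restart the Docker daemon after editing this file; per-container `--log-driver` flags still override the default.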
Method 2: Universal Forwarder Sidecar (If apps log to files)
- Deploy a Splunk Universal Forwarder container alongside your application container.
- Mount the volume containing the application log files into both the application container (for writing) and the UF container (for reading).
- Configure the UF container's inputs.conf to monitor the log files within the mounted volume.
- Configure the UF container's outputs.conf to forward data to your Splunk Cloud HEC endpoint or an intermediate Heavy Forwarder. Using HEC output from the UF is generally preferred for Splunk Cloud.
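The sidecar layout described above can be sketched in Docker Compose form. Service names, paths, and the config mount are illustrative assumptions, not details from the original post:

```yaml
version: "3.8"
services:
  app:
    image: your-application-image        # assumed to write logs to /var/log/app
    volumes:
      - app-logs:/var/log/app
  splunk-uf:
    image: splunk/universalforwarder:latest
    environment:
      SPLUNK_START_ARGS: --accept-license
      SPLUNK_PASSWORD: <admin-password>  # placeholder
    volumes:
      - app-logs:/var/log/app:ro         # read-only view of the app's log files
      # Mount your inputs.conf / outputs.conf here, e.g.:
      # - ./uf-config:/opt/splunkforwarder/etc/system/local
volumes:
  app-logs:
```

The shared named volume is what lets the UF monitor files the application writes, without either container knowing about the other.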
Example UF inputs.conf:
[monitor:///path/to/mounted/logs/app.log]
sourcetype = your_app_sourcetype
index = your_app_index
disabled = false
Example UF outputs.conf (for HEC):
[httpout]
uri = https://<your-splunk-cloud-host>:8088
hecToken = <your-hec-token>
# Consider sslVerifyServerCert = true in production after cert setup
sslVerifyServerCert = false
useACK = true

[tcpout:splunk_cloud_forwarder]
# Use if forwarding via UF->HF->Splunk Cloud S2S
server = <hf-host>:<hf-port>
# Other S2S settings...
# disabled = true  # Disable if using httpout
The UF actively monitors the specified log files and forwards new events. This is suitable when applications cannot log to stdout/stderr. The UF sidecar runs in parallel with your app container, sharing the log volume.
- The Docker logging driver *does* send application logs, provided the application's logs are directed to the container's stdout/stderr.
- The approach involving a separate Splunk Enterprise container solely for forwarding is overly complex and not typically recommended. A UF can forward directly or via a standard Heavy Forwarder infrastructure.
- If you are running in Kubernetes, consider the Splunk OpenTelemetry Collector for Kubernetes (the successor to Splunk Connect for Kubernetes), which streamlines log collection.
- HEC is the recommended path for sending data into Splunk Cloud. See Splunk Lantern: Getting Data In - Best Practices for Getting Data into Splunk.
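Whichever method you choose, it can save debugging time to smoke-test the HEC endpoint first. The sketch below builds the JSON payload HEC expects and shows the curl call to send it; `SPLUNK_HEC_URL` and `SPLUNK_HEC_TOKEN` are placeholder variables you must set yourself:

```shell
# Build the minimal event payload the HEC /services/collector/event
# endpoint accepts. The sourcetype here is an arbitrary example value.
payload='{"event": "hec connectivity test", "sourcetype": "manual_test"}'
echo "$payload"

# Uncomment to actually send once the variables are exported:
# curl -k "$SPLUNK_HEC_URL/services/collector/event" \
#      -H "Authorization: Splunk $SPLUNK_HEC_TOKEN" \
#      -d "$payload"
# A healthy endpoint replies with {"text":"Success","code":0}.
```

If the test event does not appear in your target index, check the token's allowed indexes and sourcetypes before touching the container configuration.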
🌟 Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
We recently completed an integration for one of our customers using the following links:
