Getting Data In

Are there best practices with standardizing Docker/Mesos logs into Splunk?

splunk_zen
Builder

Has anyone had experience shipping container logs into Splunk?

I'm finding that logging is not standardized across containers, so I'm ending up with half a dozen logging structures going into:

/var/lib/mesos/slave/slaves/(?<agent>[^/]+)/frameworks/(?<framework>[^/]+)/executors/(?<executor>[^/]+)/runs/(?<run>[^/]+)/{stdout,stderr}

I bumped into logspout, which appears to be used to aggregate logs.
Is this the way to go?
Any alternative suggestions?
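For reference, the path components in that regex can be lifted into indexed fields with an index-time transform keyed on the source path. This is only a sketch (the stanza names are made up, and the `source::` pattern would need to match your actual sandbox root):

```ini
# props.conf -- apply the transform to anything read from the Mesos sandbox
[source::/var/lib/mesos/slave/slaves/...]
TRANSFORMS-mesos_meta = mesos_path_fields

# transforms.conf -- pull agent/framework/executor/run out of the source path
[mesos_path_fields]
SOURCE_KEY = MetaData:Source
REGEX = /slaves/([^/]+)/frameworks/([^/]+)/executors/([^/]+)/runs/([^/]+)/
FORMAT = agent::$1 framework::$2 executor::$3 run::$4
WRITE_META = true
```

With `WRITE_META = true` the captured values become indexed fields, so you can filter on `executor=...` without a search-time rex against the source path.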

0 Karma

acharlieh
Influencer

With some slightly inconsistent logs, we got folks to leverage the lesser-known HEADER_MODE setting for Mesos/DC/OS. We defined a sourcetype in props.conf with HEADER_MODE = always, and then folks just had to modify their loggers to emit a prefix of \n***SPLUNK*** sourcetype=theirsourcetype index=theirindex\n immediately before each log event.

Splunk would then uniformly apply line breaking, see the ***SPLUNK*** line, and alter the metadata just in time for line merging and timestamping. See https://wiki.splunk.com/Community:HowIndexingWorks for a breakdown of the steps as data flows from ingestion to your index. There are some caveats: all of the logs have to use the same line breaking, and you may want to consider heavy forwarders, or possibly look into the new EVENT_BREAKER settings, to ensure the header line doesn't get separated from its corresponding body. But if you're always printing the header immediately before each event, this may not be much of an issue.
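As a minimal sketch of the setup described above (the catch-all sourcetype name is illustrative, not from the thread):

```ini
# props.conf -- the initial sourcetype the forwarder assigns to container output
[container_raw]
HEADER_MODE = always

# Each application's logger then prefixes every event with a header line:
#
#   ***SPLUNK*** sourcetype=theirsourcetype index=theirindex
#   <the actual log event>
#
# and Splunk rewrites the sourcetype/index metadata before line merging
# and timestamping, as described above.
```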

splunk_zen
Builder

Very interesting, thanks for that acharlieh
The way I sorted this out was by noticing that what looked like random gibberish in the full stderr and stdout paths actually included executor/app_name, so I was able to set up a one-to-one mapping from a source wildcard to a sourcetype.
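That kind of mapping can be expressed in props.conf with a source-pattern stanza. A sketch, assuming a hypothetical app called myapp whose name appears in the executor path segment:

```ini
# props.conf -- route one executor's sandbox output to its own sourcetype
[source::/var/lib/mesos/slave/slaves/*/frameworks/*/executors/myapp*/runs/*/stdout]
sourcetype = myapp:stdout

[source::/var/lib/mesos/slave/slaves/*/frameworks/*/executors/myapp*/runs/*/stderr]
sourcetype = myapp:stderr
```

One pair of stanzas per app gives the 1-to-1 source-wildcard-to-sourcetype relationship described above.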

Still, very helpful insight, thanks again

0 Karma

bmacias84
Champion

We currently use the HTTP Event Collector, but with a fork of the Docker Splunk logger written in Go, which we modified to be more generic for our needs so that it logs straight from our source(s). This is probably not what you wanted to hear, but that's how we got around the issue.

0 Karma

splunk_zen
Builder

The devs had told me Mesos replaces Docker's logging driver, so I had to find another way.
Thanks anyway.

0 Karma

splunk_zen
Builder

By the way, I'm of course aware of the Splunk Docker logging driver that uses the HTTP Event Collector,
but the devs pushed to avoid the driver and instead just dump the data and have Splunk consume it.

Is my only option to evangelize the driver approach, since it's cleaner for Splunk (but more work for them), or is there any way to tidy the logs in stdout/stderr prior to Splunk ingestion?

0 Karma

gjanders
SplunkTrust

If you can convince them to dump the logs consistently, then you can work with it!
If they all insist on dumping logs in their own formats, you will end up with many sourcetypes to attempt to deal with their logs...

0 Karma