Getting Data In

How can we log and containerize the logs using Kubernetes and Splunk?

paimonsoror
Builder

Hi Folks;

I came across this post on GitHub https://github.com/kubernetes/kubernetes/issues/24677 and it had some fantastic options for pulling data from K8s/Docker into Splunk. It seems that the 'easy' approach here is to leverage the integration of K8s/Red Hat with Fluentd, and then push the data into Splunk. I was hoping to pick the brains of some of our Splunk experts to see if there is also a way to do a direct-to-Splunk integration. Ideally, our goal is to make sure that the data that comes into Splunk is 'containerized' so that it can easily be organized.

I see the Docker Splunk logging driver is available, but it seems to be the less trusted approach since it doesn't integrate well with K8s.

mattymo
Splunk Employee

UPDATE: Splunk Connect for Kubernetes is the current Splunk option as of Jan 2020
https://github.com/splunk/splunk-connect-for-kubernetes

Check out the work we are doing on our open source docker-itmonitoring project, including log and metadata collection and a prototype app.

https://github.com/splunk/docker-itmonitoring/tree/7.0.0-k8s

We are playing with any and all integrations, but the good ol' UF does some great things as a daemonset! Check out the TAs we have started building, and feel free to contribute your experiences as we shape the future of Splunk's Docker/K8s support moving forward!
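For anyone who hasn't tried the daemonset route, here is a minimal sketch of what a UF DaemonSet manifest might look like. This is an assumption-laden illustration, not an official manifest: the image tag, namespace, and mount paths are all placeholders you would adapt to your cluster.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: splunk-uf
  namespace: logging        # assumed namespace
spec:
  selector:
    matchLabels:
      app: splunk-uf
  template:
    metadata:
      labels:
        app: splunk-uf
    spec:
      containers:
      - name: splunk-uf
        image: splunk/universalforwarder:latest   # assumed image tag
        env:
        - name: SPLUNK_START_ARGS
          value: "--accept-license"
        volumeMounts:
        # Mount the host's log directories read-only so the UF can tail them
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: dockerlogs
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockerlogs
        hostPath:
          path: /var/lib/docker/containers
```

Because a DaemonSet schedules one pod per node, every node's container logs get picked up without managing forwarders host by host.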

- MattyMo

sayeed101
New Member

@mmodestino - I had a look at GitHub. Running UFs on K8s or OpenShift hosts, either natively or as a container, means I'd have to run them as root rather than as a splunk user. On OpenShift, the /var/log/containers directory is a symlink to /var/lib/docker, where the actual logs are stored. Is it advisable to set an ACL on /var/lib/docker with read access for the splunk user? I've seen similar suggestions on other posts where a non-root user needs access to log directories owned by root.
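For reference, the ACL approach described above might look like the sketch below. The exact path and username are assumptions for your environment, and you'd want to test this on a non-production node first since Docker manages that directory itself.

```shell
# Grant the splunk user read access (and traverse on directories, via the
# capital X) to the Docker container log tree, recursively.
setfacl -R -m u:splunk:rX /var/lib/docker/containers

# Default ACL entries so that newly created log files inherit the same access.
setfacl -R -m d:u:splunk:rX /var/lib/docker/containers

# Verify the resulting ACL.
getfacl /var/lib/docker/containers
```

This avoids running the forwarder as root, at the cost of maintaining ACLs on a directory Docker considers its own.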


mattymo
Splunk Employee

Technically, this is easily avoided with daemonsets by letting the container run with privileges, but I am looking to confirm with my friends at Red Hat, as I see no reason that some creative Linux'ing can't make that happen.
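Running the container "with privs" usually comes down to the pod's securityContext. A hedged sketch of the relevant fragment (the container name and image are placeholders, and privileged mode is a big hammer; running as UID 0 alone may be enough):

```yaml
    spec:
      containers:
      - name: splunk-uf
        image: splunk/universalforwarder:latest   # assumed image
        securityContext:
          privileged: true     # broad: full host access
          # or, more narrowly, just run as root inside the container:
          # runAsUser: 0
```

On OpenShift specifically, the pod's service account would also need an SCC that permits this.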

- MattyMo

outcoldman
Communicator

We just published the first version of our application, Monitoring Kubernetes (https://splunkbase.splunk.com/app/3743/), and our collector (https://www.outcoldsolutions.com). Please take a look at our manual on how to get started: https://www.outcoldsolutions.com/docs/monitoring-kubernetes/

kiyototamura
Explorer

You can always use the Splunk Universal Forwarder: just create a k8s daemonset with it. However, this approach has some drawbacks compared to the Fluentd-based approach.

  1. Tight coupling between logging agent and Splunk: Log data has many use cases. Also, you may want to send the logs to other systems like Amazon S3, Google Cloud Storage, etc. For such use cases, the Fluentd-based approach is more robust because Fluentd Enterprise can send your container logs to multiple systems with a unified log pipeline.
  2. Cost management: You may want to pre-process and filter the logs you send to Splunk. This is very easy with Fluentd. Moreover, you can easily extend it and implement custom filters to meet your unique needs and handle unique log formats. You do not lose any log data even if you filter it in Fluentd: simply redirect all raw logs into much cheaper cold storage like Amazon Glacier.
  3. k8s/Docker compatibility: Fluentd is an official Cloud Native Computing Foundation project, and as such, it collaborates closely with the rest of the container/container orchestration ecosystem to ensure forward compatibility with Docker/k8s.
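To illustrate points 1 and 2, a Fluentd pipeline that tails container logs, drops noise, and fans out to both Splunk HEC and S3 could be sketched roughly as below. Plugin names and parameters are assumptions (the splunk_hec and s3 output plugins would need to be installed separately), and the host, token, and bucket are placeholders.

```
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

# Filter out noisy health-check lines before they reach any destination
<filter kubernetes.**>
  @type grep
  <exclude>
    key log
    pattern /healthz/
  </exclude>
</filter>

# Copy each event to multiple outputs: Splunk for search, S3 for raw retention
<match kubernetes.**>
  @type copy
  <store>
    @type splunk_hec          # assumed: fluent-plugin-splunk-hec installed
    hec_host splunk.example.com
    hec_port 8088
    hec_token YOUR_HEC_TOKEN
  </store>
  <store>
    @type s3                  # assumed: fluent-plugin-s3 installed
    s3_bucket raw-container-logs
    path logs/
  </store>
</match>
```

The copy output is what decouples the agent from Splunk: the filtered stream goes to Splunk while the full stream can land in cheap storage.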

Full disclosure: I work at Treasure Data, where we offer Fluentd Enterprise, a commercial offering built around Fluentd. If you are interested, check out the website and the Splunk optimization module.

sloshburch
Splunk Employee

Thanks for the answer and the honesty/disclosure! Very cool of you.
