Need your help,
Can you please tell us how to receive Kubernetes STDOUT data in Splunk Enterprise? Kubernetes is running on CoreOS.
Thank you,
We used Fluentd with Splunk Cloud and it worked seamlessly.
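For anyone wanting a concrete starting point, the Fluentd side of this setup is typically a single match block. Below is a minimal sketch assuming Splunk's fluent-plugin-splunk-hec output plugin; the host and token are placeholders, and parameter names should be verified against the plugin version you install:

```
# Minimal sketch: route Kubernetes container logs to Splunk HEC.
# Assumes fluent-plugin-splunk-hec; host and token are placeholders.
<match kubernetes.**>
  @type splunk_hec
  protocol https
  # Placeholder values: your Splunk HEC endpoint, port, and token
  hec_host splunk.example.com
  hec_port 8088
  hec_token YOUR_HEC_TOKEN
</match>
```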
If anyone using Splunk Cloud sees this answer: the methods above apply to the Enterprise version as well as the Cloud.
Hey dhavamanis,
We have released Splunk Connect for Kubernetes!
It uses Fluentd and Heapster to get you logs, metrics, and metadata, and it is Splunk-built and supported!
Check it out!
https://github.com/splunk/splunk-connect-for-kubernetes
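If you deploy it with the Helm chart in that repo, the Splunk connection settings go in a values file. Here is a minimal sketch; the key layout follows the chart's documented structure, but verify it against the chart version you use, and the host and token are placeholders:

```yaml
# Minimal sketch of Helm values for Splunk Connect for Kubernetes.
# Key layout per the chart's README; host and token are placeholders.
global:
  splunk:
    hec:
      protocol: https
      host: splunk.example.com                      # your HEC endpoint
      port: 8088                                    # default HEC port
      token: 00000000-0000-0000-0000-000000000000   # your HEC token
```

You would then install with something like `helm install -f values.yaml ...` as described in the repo's README.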
Is there a way to trim colors from fluentd-hec, similar to what is suggested in https://github.com/mattheworiordan/fluent-plugin-color-stripper, at a per-pod level?
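One way to approach this at the pod level, sketched below, is a Fluentd record_transformer filter that strips ANSI color escape sequences from a single pod's logs before the HEC output sees them. The tag pattern, the pod name (my-app), and the `log` field name are assumptions based on the default Kubernetes tail source, so adjust them to your pipeline:

```
# Sketch: strip ANSI color codes for one pod's container logs only.
# The tag pattern, pod name (my-app), and "log" field are assumptions
# based on the default kubernetes.* tail source.
<filter kubernetes.var.log.containers.my-app-*.log>
  @type record_transformer
  enable_ruby true
  <record>
    log ${record["log"].gsub(/\e\[[0-9;]*m/, "")}
  </record>
</filter>
```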
We just published the first version of our application "Monitoring Kubernetes" (https://splunkbase.splunk.com/app/3743/) and collector (https://www.outcoldsolutions.com). Please take a look at our manual on how to get started: https://www.outcoldsolutions.com/docs/monitoring-kubernetes/
Hi dhavamanis,
Fluentd is one of the preferred logging layers for Kubernetes and a common choice for routing Kubernetes data to Splunk, Elasticsearch, Kafka, Amazon S3, and other destinations. Using a Kubernetes DaemonSet, you can deploy a Fluentd pod on every Kubernetes node and configure it to route stdout, stderr, and other streams to Elasticsearch, Splunk, etc. (a minimal manifest sketch follows the documentation links below). Fluentd can also enrich each event with information about the Kubernetes Pod, Namespace, and Node.
Documentation on the Kubernetes DaemonSet: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
Documentation on the Fluentd DaemonSet: http://docs.fluentd.org/v0.12/articles/kubernetes-fluentd
Documentation on the Fluentd-Elasticsearch DaemonSet: http://docs.fluentd.org/v0.12/articles/kubernetes-fluentd#logging-to-elasticsearch
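To make the DaemonSet approach concrete, here is a minimal manifest sketch. The image tag is illustrative (pick the fluentd-kubernetes-daemonset variant that matches your destination), and the host paths assume Docker-based nodes:

```yaml
# Minimal sketch of a Fluentd DaemonSet.
# Image tag is illustrative; host paths assume Docker-based nodes.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        # Illustrative image; choose the variant for your output target
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
```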
If you want a supported Splunk plugin and a Kubernetes -> Splunk DaemonSet, Fluentd Enterprise offers SLA support for sending data to Splunk Enterprise and Splunk Cloud. For more information, you can email me at A@ Treasuredata.com or visit https://fluentd.treasuredata.com
Thanks,
Anurag
What path did you end up on?
Hi dhavamanis,
This is not a Splunk problem, but a Kubernetes problem. Nevertheless, a quick Google search revealed this:
"When a cluster is created, the standard output and standard error output of each container can be ingested using a Fluentd agent running on each node into either Google Cloud Logging or into Elasticsearch and viewed with Kibana."
From here https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/logging.md
If you can get into ES/Kibana you can get it into Splunk 😉
Hope this helps. And no, I have no idea what Kubernetes is and cannot be of further help 🙂
cheers, MuS
What's the latest way to forward Kubernetes application-level logs (i.e., from containers in Pods) to Splunk?
I want to understand how a pull-based method for fetching data from a Google K8s cluster at the container level can be configured for Splunk.