We are running a Kubernetes cluster and are shipping pod logs to Splunk Cloud.
Our current setup:
1. Universal Forwarders are deployed as a DaemonSet that monitors /var/log/containers/* (a sketch of such a DaemonSet follows this list). Sample from inputs.conf:
[monitor:///var/log/containers/kube*.log]
# host_regex uses the first capture group from the file path (here the pod name) as the host value
host_regex = /var/log/containers/(.*)_.*_.*\.log
sourcetype = kube-state
index = kepler-dev
2. The Universal Forwarders send the data to heavy forwarders, which apply some regexes.
3. The heavy forwarders send the data to Splunk Cloud for indexing.
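For reference, here is a minimal sketch of the kind of DaemonSet described in step 1. The image tag, label names, and volume layout are illustrative assumptions, not the actual manifest; a real deployment typically also mounts /var/log/pods (or the container runtime's log directory), since the files under /var/log/containers are symlinks.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: splunk-uf
spec:
  selector:
    matchLabels:
      app: splunk-uf
  template:
    metadata:
      labels:
        app: splunk-uf
    spec:
      containers:
        - name: splunk-uf
          image: splunk/universalforwarder:9.1   # assumed image and tag
          volumeMounts:
            - name: container-logs
              mountPath: /var/log/containers
              readOnly: true
      volumes:
        - name: container-logs
          hostPath:
            path: /var/log/containers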
I want to send the hostname of the worker node, either as part of the host field or as a separate field. Is there a way I can achieve this?
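One way this is commonly handled with a Universal Forwarder DaemonSet is to expose the node name through the Downward API and write it into inputs.conf before the forwarder starts. Below is a sketch under assumptions: the environment-variable name, the forwarder paths, and the wrapped entrypoint are all hypothetical, and _meta support under [default] should be verified against your forwarder version.

# Hypothetical excerpt from the UF container spec in the DaemonSet above
env:
  - name: K8S_NODE_NAME            # variable name is an assumption
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName   # Downward API: the worker node's name
command: ["/bin/sh", "-c"]
args:
  - |
    # Write the node name into inputs.conf as the default host and as an
    # indexed field, then hand off to the image's normal entrypoint
    # (the entrypoint path below is an assumption).
    cat >> /opt/splunkforwarder/etc/system/local/inputs.conf <<EOF
    [default]
    host = ${K8S_NODE_NAME}
    _meta = k8s_node::${K8S_NODE_NAME}
    EOF
    exec /sbin/entrypoint.sh start-service

Note that host_regex in the monitor stanza above takes precedence over a default host, so if you keep the pod name as host, the _meta indexed field (k8s_node here) is what surfaces the worker node name as a separate field; searching an indexed field cleanly may also require a matching fields.conf entry on the search tier.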
I would highly suggest looking at alternatives for forwarding Kubernetes logs to Splunk: