Hi all,
I'm getting strange results when searching container logs collected by Splunk Connect for Kubernetes…
when searching for the pod logs with normal SPL:
index=kubernetes earliest=08/24/2020:17:00:00 latest=08/24/2020:17:45:00 pod=nextcloud-dev-84ff5f7dfb-jj2gz | stats count
result=0
when forcing the search to run on the search head:
index=kubernetes earliest=08/24/2020:17:00:00 latest=08/24/2020:17:45:00 | noop | search pod=nextcloud-dev-84ff5f7dfb-jj2gz | stats count
result=131 - looks correct
when running tstats:
| tstats count where index=kubernetes pod=nextcloud-dev-84ff5f7dfb-jj2gz earliest=08/24/2020:17:00:00 latest=08/24/2020:17:45:00
result=131 - looks correct…
What might be the reason the "normal" search is not working for the user?
Running Splunk Enterprise v8.0.5.
best regards,
Andreas
Hi all,
As pod is an indexed field, you have to either configure fields.conf or search with pod::mypod.
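For example, the search from the question should return results when the pod filter uses the indexed-field syntax (a sketch based on the original search):
index=kubernetes earliest=08/24/2020:17:00:00 latest=08/24/2020:17:45:00 pod::nextcloud-dev-84ff5f7dfb-jj2gz | stats count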
It's stated in the documentation:
"Splunk Connect for Kubernetes sends events to Splunk which can contain extra meta-data attached to each event. Metadata values such as "pod", "namespace", "container_name","container_id", "cluster_name" will appear as fields when viewing the event data inside Splunk. There are two solutions for running searches in Splunk on meta-data.
"
fields.conf example:
https://github.com/splunk/splunk-connect-for-kubernetes/blob/develop/fields.conf.example
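As a minimal sketch (assuming pod is the only metadata field you need to search with field=value; the linked example also covers namespace, container_name, container_id and cluster_name), the fields.conf stanza on the search head would look like:
[pod]
INDEXED = true
With this in place, a pod=<value> search should be matched against the indexed field instead of relying on a search-time extraction, so the original search should return the expected 131 events.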
sorry,
Andreas