Splunk Search

Inconsistent results when searching splunk connect for kubernetes logs


Hi all,

I’m getting strange results when searching container logs collected by Splunk Connect for Kubernetes.

When searching the pod logs with normal SPL:




index=kubernetes earliest=08/24/2020:17:00:00 latest=08/24/2020:17:45:00 pod=nextcloud-dev-84ff5f7dfb-jj2gz | stats count





When forcing the filtering to run on the search head:




index=kubernetes earliest=08/24/2020:17:00:00 latest=08/24/2020:17:45:00 | noop | search pod=nextcloud-dev-84ff5f7dfb-jj2gz | stats count




result=131 - looks correct

When running tstats:




| tstats count where index=kubernetes pod=nextcloud-dev-84ff5f7dfb-jj2gz earliest=08/24/2020:17:00:00 latest=08/24/2020:17:45:00




result=131 - looks correct


What might be causing the "normal" search to not work for the user?

Running Splunk Enterprise v8.0.5.


best regards,





Hi all,

Since pod is an indexed field, you either have to configure fields.conf or search with pod::mypod.

it's stated in the documentation:


"Splunk Connect for Kubernetes sends events to Splunk which can contain extra metadata attached to each event. Metadata values such as "pod", "namespace", "container_name", "container_id", "cluster_name" will appear as fields when viewing the event data inside Splunk. There are two solutions for running searches in Splunk on metadata.

  • Modify the search to use fieldname::value instead of fieldname=value.
  • Configure fields.conf on your downstream Splunk system to have your metadata fields available to be searched using fieldname=value. Example: fields.conf.example"
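Applied to the search from the question, the first option would look like this (a sketch; the pod value is taken from the original search):

index=kubernetes earliest=08/24/2020:17:00:00 latest=08/24/2020:17:45:00 pod::nextcloud-dev-84ff5f7dfb-jj2gz | stats count

The :: syntax tells Splunk to match the indexed field directly at the indexer, instead of relying on a search-time field extraction that does not exist for these metadata fields.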


fields.conf example:
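The example content did not make it into the post; a minimal sketch of what fields.conf could contain for the Kubernetes metadata fields (field names taken from the documentation quote above, stanza syntax per Splunk's fields.conf reference):

[pod]
INDEXED = true

[namespace]
INDEXED = true

[container_name]
INDEXED = true

[container_id]
INDEXED = true

[cluster_name]
INDEXED = true

With these stanzas deployed to the search head, fieldname=value searches on these fields should return consistent results.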




