Monitoring Splunk

Need to drop container metrics from specific namespaces using the Splunk OTel Collector Kubernetes cluster receiver

vyomsap
New Member

We are monitoring a GKE cluster with the Splunk OTel Collector's Kubernetes cluster receiver pod, and we want to skip or exclude all metrics from specific namespaces, so that they are dropped and not sent to the Splunk endpoint.

Our goal is to reduce the container count in Splunk Observability Cloud, as we don't need any metrics from containers in those namespaces.

 


livehybrid
SplunkTrust

Hi @vyomsap 

Check out https://help.splunk.com/en/splunk-observability-cloud/manage-data/splunk-distribution-of-the-opentel... which has some examples of this. Specifically, something like:

agent:
  config:
    processors:
      # Exclude all telemetry data (metrics, logs, traces) from a namespace named 'namespaceX'
      filter/exclude_all_telemetry_data_from_namespace:
        logs:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
        metrics:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'

 

Then reference it in the relevant pipelines. Since the goal here is dropping metrics, make sure it is included in the metrics pipeline (the logs entry only matters if you also ship logs):

service:
  pipelines:
    metrics:
      processors:
...
        - filter/exclude_all_telemetry_data_from_namespace
    logs:
      processors:
...
        - filter/exclude_all_telemetry_data_from_namespace
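One caveat worth flagging (this is based on the Helm chart's layout, so verify against your values file): the agent daemonset and the Kubernetes cluster receiver run as separate collector deployments with separate configs. Pod and container metrics from the k8s_cluster receiver are emitted by the cluster-receiver deployment, so a filter defined only under agent.config will not affect them. The same filter would need to be repeated under clusterReceiver.config, roughly like this (the processor name and namespace regexp are illustrative):

clusterReceiver:
  config:
    processors:
      filter/exclude_all_telemetry_data_from_namespace:
        metrics:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
    service:
      pipelines:
        metrics:
          processors:
            # Note: overriding the pipeline replaces the chart's default
            # processor list, so the defaults may need to be repeated here.
            - filter/exclude_all_telemetry_data_from_namespace

You can confirm which deployment is emitting a given metric by checking the k8s.pod.name / host resource attributes on the data arriving in Observability Cloud.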

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

sabaork
Engager

Hi @livehybrid, thanks for the quick answer. We have already added a filter like this in our Helm configuration, but my question is: do these filters also affect the cluster-receiver deployment? We tried applying the filters and referencing them in the service pipelines just as you described, but we still see container metrics coming from Kubernetes namespaces other than the ones we specified. We don't understand how these metrics are collected: are they coming from the cluster-receiver pod or from the daemonset?
