Monitoring Splunk

Need to drop container metrics from specific namespaces using the Splunk OTel Collector Kubernetes cluster receiver

vyomsap
New Member

We are monitoring a GKE cluster with the Splunk OTel Collector's Kubernetes cluster receiver pod, and we want to skip or exclude all metrics from specific namespaces so that they are dropped and never sent to the Splunk endpoint.

Our goal is to reduce the container count in Splunk Observability Cloud; we don't need any container metrics from those namespaces.

 


livehybrid
SplunkTrust

Hi @vyomsap 

Check out https://help.splunk.com/en/splunk-observability-cloud/manage-data/splunk-distribution-of-the-opentel... which has some examples on this, specifically something like this:

agent:
  config:
    processors:
      # Exclude all telemetry data (logs and metrics) from a namespace named 'namespaceX'
      filter/exclude_all_telemetry_data_from_namespace:
        logs:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
        metrics:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'

 

Then reference it in the processors list of the relevant service pipelines:

service:
  pipelines:
    logs:
      processors:
...
        - filter/exclude_all_telemetry_data_from_namespace
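
Note that the pipeline snippet above only wires the filter into the logs pipeline; since the goal here is dropping metrics, the same processor also needs to be listed in the metrics pipeline, for example:

service:
  pipelines:
    metrics:
      processors:
...
        - filter/exclude_all_telemetry_data_from_namespace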

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

sabaork
Engager

Hi @livehybrid , thanks for the quick answer. We have already added a filter like this in our Helm configuration, but my question is: do these filters also affect the cluster-receiver deployment? We tried applying the filters and referencing them in the service pipelines just as you mentioned, but we still see container metrics coming from Kubernetes namespaces other than the ones we specified. We don't understand how these metrics are being collected: are they coming from the cluster-receiver pod or from the daemonset?
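
For context: in the splunk-otel-collector Helm chart, the agent daemonset and the cluster receiver are separate collector deployments, each with its own config override, so a filter defined only under agent.config is not picked up by the cluster-receiver pod. Below is a minimal sketch of an equivalent override under clusterReceiver.config; the elided pipeline entries stand for whatever processors the chart already puts in that pipeline and are not spelled out here:

clusterReceiver:
  config:
    processors:
      # Same namespace filter, defined for the cluster receiver's metrics
      filter/exclude_all_telemetry_data_from_namespace:
        metrics:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
    service:
      pipelines:
        metrics:
          processors:
...
            - filter/exclude_all_telemetry_data_from_namespace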
