Monitoring Splunk

Need to drop container metrics from specific namespaces using the Splunk OTel Collector Kubernetes cluster receiver

vyomsap
New Member

We are monitoring a GKE cluster with the Splunk OTel Collector Kubernetes cluster receiver pod, and we want to skip or exclude all metrics from specific namespaces so that they are dropped and not sent to the Splunk endpoint.

Our goal is to reduce the container count in Splunk Observability Cloud, as we don't need any container metrics from those namespaces.

 


livehybrid
SplunkTrust

Hi @vyomsap 

Check out https://help.splunk.com/en/splunk-observability-cloud/manage-data/splunk-distribution-of-the-opentel... which has some examples of this, for instance:

agent:
  config:
    processors:
      # Exclude logs and metrics from a namespace named 'namespaceX'
      filter/exclude_all_telemetry_data_from_namespace:
        logs:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
        metrics:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'

 

Then add it to the processors list in the relevant pipelines:

service:
  pipelines:
    logs:
      processors:
...
        - filter/exclude_all_telemetry_data_from_namespace
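
Since your goal is to drop metrics rather than logs, make sure the same processor is also referenced in the metrics pipeline. A sketch in the same style, where the ellipsis stands for the chart's default processors:

service:
  pipelines:
    metrics:
      processors:
...
        - filter/exclude_all_telemetry_data_from_namespace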

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

sabaork
Engager

Hi @livehybrid, thanks for the quick answer. We have already added a filter like this in our Helm configuration, but my question is whether these filters also affect the cluster-receiver deployment. We tried applying the filters and referencing them in the service pipelines just as you described, but we still see container metrics coming from Kubernetes namespaces other than what we specified, and we don't understand how these metrics are being collected. Are they coming from the cluster-receiver pod or from the daemonset?
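
For reference, this is the kind of override we are considering for the cluster receiver, assuming the Helm chart accepts a clusterReceiver.config section in the same way as agent.config (the processor name and namespace below are placeholders, and the ellipsis stands for the deployment's default processors):

clusterReceiver:
  config:
    processors:
      filter/exclude_metrics_from_namespace:
        metrics:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
    service:
      pipelines:
        metrics:
          processors:
...
            - filter/exclude_metrics_from_namespace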
