Monitoring Splunk

Need to drop container metrics from specific namespaces using the Splunk OTel Collector Kubernetes cluster receiver

vyomsap
New Member

We are monitoring a GKE cluster with the Splunk OTel Collector's Kubernetes cluster receiver pod, and we want to exclude all metrics from specific namespaces so that they are dropped and never sent to the Splunk endpoint.

Our goal is to reduce the container count in Splunk Observability Cloud, as we don't need any container metrics from those namespaces.

 


livehybrid
SplunkTrust

Hi @vyomsap 

Check out https://help.splunk.com/en/splunk-observability-cloud/manage-data/splunk-distribution-of-the-opentel..., which has some examples of this. Specifically, something like:

agent:
  config:
    processors:
      # Exclude all logs and metrics from a namespace named 'namespaceX'
      filter/exclude_all_telemetry_data_from_namespace:
        logs:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
        metrics:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
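
If you need to exclude more than one namespace, the regexp value can use alternation, for example value: '^(namespaceX|namespaceY)$'.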

 

Then reference the filter in the processors list of the relevant pipelines, for example the logs pipeline:

service:
  pipelines:
    logs:
      processors:
...
        - filter/exclude_all_telemetry_data_from_namespace
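
Since your goal is to drop metrics rather than logs, the same filter also needs to be referenced in the metrics pipeline. A minimal sketch (the comment stands for whatever processors the chart already defines there, which should be kept):

service:
  pipelines:
    metrics:
      processors:
        # ... keep the processors already defined by the chart ...
        - filter/exclude_all_telemetry_data_from_namespace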

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

sabaork
Engager

Hi @livehybrid, thanks for the quick answer. We have already added a filter like this in our Helm configuration, but my question is: do these filters also affect the cluster-receiver deployment? We tried applying the filters and referencing them in the service pipelines just as you described, but we still see container metrics coming from Kubernetes namespaces other than the ones we specified. We don't understand how these metrics are collected: are they coming from the cluster-receiver pod or the daemonset?
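
If they are coming from the cluster receiver, we assume the same filter would also have to be defined under that deployment's own config override rather than only under agent.config. A sketch of what we think that would look like, assuming the splunk-otel-collector Helm chart exposes a clusterReceiver.config section analogous to agent.config (we have not verified this):

clusterReceiver:
  config:
    processors:
      # Same filter as before: drop metrics whose resource has k8s.namespace.name matching the regexp
      filter/exclude_all_telemetry_data_from_namespace:
        metrics:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
    service:
      pipelines:
        metrics:
          processors:
            # ... keep the processors already defined by the chart ...
            - filter/exclude_all_telemetry_data_from_namespace

Is that the right place to put it?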
