We are monitoring a GKE cluster with the Splunk OpenTelemetry Collector's Kubernetes cluster receiver pod, and we want to exclude all metrics from specific namespaces so that they are dropped and never sent to the Splunk endpoint.
Our goal is to reduce the container count in Splunk Observability Cloud, as we don't need any metrics from containers in those namespaces.
Hi @vyomsap
Check out https://help.splunk.com/en/splunk-observability-cloud/manage-data/splunk-distribution-of-the-opentel... which has some examples on this, specifically something like this:
agent:
  config:
    processors:
      # Exclude all telemetry data (metrics, logs, traces) from a namespace named 'namespaceX'
      filter/exclude_all_telemetry_data_from_namespace:
        logs:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
        metrics:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
Then add it to the processors:
service:
  pipelines:
    logs:
      processors:
        ...
        - filter/exclude_all_telemetry_data_from_namespace
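Since the filter above also defines a metrics section, the same processor needs to be referenced in the metrics pipeline as well; a minimal sketch, with ... standing in for the pipeline's existing processors:

service:
  pipelines:
    metrics:
      processors:
        ...
        - filter/exclude_all_telemetry_data_from_namespace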
Hi @livehybrid, thanks for the quick answer. We have already added a filter like this in our Helm configuration, but my question is: do these filters also affect the cluster-receiver deployment? We tried applying the filters and referencing them in the service pipelines just as you mentioned, but we still see container metrics coming from Kubernetes namespaces other than the ones we specified. We don't understand how these metrics are being collected: are they coming from the cluster-receiver pod or the daemonset?
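For reference, in the Splunk OpenTelemetry Collector Helm chart the agent section only configures the daemonset; the cluster receiver runs as a separate deployment configured under clusterReceiver, so a filter defined only under agent.config would not apply to the metrics it produces (such as container metrics from the k8s_cluster receiver). A hedged sketch of adding the same filter there, assuming that chart layout and with ... standing in for the deployment's existing processors:

clusterReceiver:
  config:
    processors:
      # Same filter as in the agent config, applied to the cluster receiver deployment
      filter/exclude_all_telemetry_data_from_namespace:
        metrics:
          exclude:
            match_type: regexp
            resource_attributes:
              - key: k8s.namespace.name
                value: '^(namespaceX)$'
    service:
      pipelines:
        metrics:
          processors:
            ...
            - filter/exclude_all_telemetry_data_from_namespace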