Getting Data In

Prometheus metric labels in a splunk metric index

joergherzinger
Loves-to-Learn Everything

Hi,

I am trying to collect metrics from various sources with the OTel Collector and send them to our Splunk Enterprise instance via HEC. Collecting and sending the metrics via OTel works fine, and I was quickly able to see metrics in my Splunk index.

However, the labels of those Prometheus metrics are completely missing in Splunk. Here is an example of some of the metrics I scrape:


# HELP jmx_exporter_build_info A metric with a constant '1' value labeled with the version of the JMX exporter.
# TYPE jmx_exporter_build_info gauge
jmx_exporter_build_info{version="0.20.0",name="jmx_prometheus_javaagent",} 1.0
# HELP jvm_info VM version info
# TYPE jvm_info gauge
jvm_info{runtime="OpenJDK Runtime Environment",vendor="AdoptOpenJDK",version="11.0.8+10",} 1.0
# HELP jmx_config_reload_failure_total Number of times configuration have failed to be reloaded.
# TYPE jmx_config_reload_failure_total counter
jmx_config_reload_failure_total 0.0
# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="G1 Young Generation",} 883.0
jvm_gc_collection_seconds_sum{gc="G1 Young Generation",} 133.293
jvm_gc_collection_seconds_count{gc="G1 Old Generation",} 0.0
jvm_gc_collection_seconds_sum{gc="G1 Old Generation",} 0.0
# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
# TYPE jvm_memory_pool_allocated_bytes_total counter
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 6.76448896E8
jvm_memory_pool_allocated_bytes_total{pool="G1 Old Gen",} 1.345992784E10
jvm_memory_pool_allocated_bytes_total{pool="G1 Eden Space",} 9.062406160384E12
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 3.38238592E8
jvm_memory_pool_allocated_bytes_total{pool="G1 Survivor Space",} 1.6919822336E10
jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 1.41419488E8
jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 1.141665096E9
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 3544448.0


I do see the values in Splunk, but the labels are lost; for the last metric, jvm_memory_pool_allocated_bytes_total, there is no way to tell which pool a value belongs to. Is this intentional, or am I missing something? The Getting Started page for metrics (https://docs.splunk.com/Documentation/Splunk/latest/Metrics/GetStarted) also has no information on where those labels are stored or how I could query based on them.
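From my reading of the docs, I would have expected the labels to arrive as dimensions of the metric, queryable roughly along these lines (a sketch only; onboarding_metric is the index I send to, and pool is the label in question):

```
# List the dimensions Splunk has indexed for this metric
| mcatalog values(_dims) WHERE index=onboarding_metric AND metric_name="jvm_memory_pool_allocated_bytes_total"

# Aggregate the metric split by the Prometheus label
| mstats sum(jvm_memory_pool_allocated_bytes_total) WHERE index=onboarding_metric span=5m BY pool
```

Neither query returns the pool dimension for me, which is what prompts this question.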


tia,

    Jörg


PaulPanther
Builder

Could you please try setting the parameter resource_to_telemetry_conversion to true?

exporters:
  prometheus:
    endpoint: "1.2.3.4:1234"
    [..]
    resource_to_telemetry_conversion:
      enabled: true

opentelemetry-collector-contrib/exporter/prometheusexporter at main · open-telemetry/opentelemetry-c...


PaulPanther
Builder

Could you please share your current otel config with us?


joergherzinger
Loves-to-Learn Everything

This is my current otel config:


---
service:
  telemetry:
    logs:
      level: "debug"
    metrics:
      level: detailed
      address: ":8888"
  pipelines:
    metrics:
      receivers:
        - prometheus
      exporters:
        - splunk_hec

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: jira_dev
          scrape_interval: 60s
          static_configs:
            - targets: ["<hidden>:8060"]

exporters:
  debug:
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 200
  splunk_hec:
    token: "<hidden>"
    endpoint: "https://<hidden>:8088/services/collector"
    source: "toolchainotel"
    sourcetype: "toolchain:test:metric"
    index: "onboarding_metric"
    tls:
      insecure_skip_verify: true
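For debugging, I could also wire the (currently unused) debug exporter into the pipeline; as far as I understand, it would then print each scraped datapoint with its attributes to the collector log, which should show whether the pool label even survives the scrape:

```
service:
  pipelines:
    metrics:
      receivers:
        - prometheus
      exporters:
        - splunk_hec
        - debug
```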