Splunk AppDynamics

Kafka-JMX metrics are not visible in Signalfx.

aashoksi_cisco
Splunk Employee

Hi,

I have configured the Kafka JMX metrics receiver in the splunk-otel-collector, and the collector logs now show that the metrics (Kafka JMX + JVM) are exported successfully to SignalFx, but those metrics are not visible in SignalFx. In SignalFx charts I get 0 time series for all of the JMX metrics. To verify, I checked SignalFx usage analytics, where all of these metrics do appear, but I am still unable to get any data in the charts themselves. Please see the logs below, which show some of the metrics. One metric in the log, "queueSize", is not a JMX metric, and we do receive it; the other metrics are not being fetched.
I would appreciate it if anyone could suggest or provide input to resolve this issue.
Based on the logs, it seems clear that the splunk-otel-collector is doing its job correctly, but the data is not appearing in SignalFx due to some unknown issue on the SignalFx side.
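For reference, a minimal collector configuration for this kind of setup might look like the sketch below. The JMX endpoint, jar path, access token, and realm are placeholders for illustration, not the actual values from this deployment:

```yaml
receivers:
  jmx:
    # Path to the OpenTelemetry JMX Metric Gatherer jar (placeholder path)
    jar_path: /opt/opentelemetry-jmx-metrics.jar
    # JMX service URL of the Kafka broker (placeholder host/port)
    endpoint: service:jmx:rmi:///jndi/rmi://kafka-broker:9999/jmxrmi
    # Collect both the Kafka broker and JVM metric sets
    target_system: kafka,jvm
    collection_interval: 60s

exporters:
  signalfx:
    # Token and realm are placeholders; set these for your org
    access_token: ${SFX_ACCESS_TOKEN}
    realm: us1

service:
  pipelines:
    metrics:
      receivers: [jmx]
      exporters: [signalfx]
```

If the collector debug/logging output (as below) shows the data points being emitted, the pipeline itself is wired correctly, and the investigation shifts to what happens after export.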



Logs:
---------------------------------------------------------------------------------------------------
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope io.opentelemetry.sdk.logs
Metric #0
Descriptor:
-> Name: queueSize
-> Description: The number of items queued
-> Unit: 1
-> DataType: Gauge
NumberDataPoints #0
Data point attributes:
-> processorType: Str(BatchLogRecordProcessor)
StartTimestamp: 2025-12-30 18:12:56.077595 +0000 UTC
Timestamp: 2026-01-01 18:35:56.209216 +0000 UTC
Value: 0
ScopeMetrics #1
ScopeMetrics SchemaURL:
InstrumentationScope io.opentelemetry.contrib.jmxmetrics 1.48.0-alpha
Metric #0
Descriptor:
-> Name: kafka.request.time.avg
-> Description: The average time the broker has taken to service requests
-> Unit: ms
-> DataType: Gauge
NumberDataPoints #0
Data point attributes:
-> type: Str(produce)
StartTimestamp: 2025-12-30 18:12:56.077595 +0000 UTC
Timestamp: 2026-01-01 18:35:56.209216 +0000 UTC
Value: 0.000000
Metric #1
Descriptor:
-> Name: jvm.memory.pool.init
-> Description: current memory pool usage
-> Unit: By
-> DataType: Gauge
NumberDataPoints #0
Data point attributes:
-> name: Str(CodeHeap 'non-profiled nmethods')
StartTimestamp: 2025-12-30 18:12:56.077595 +0000 UTC
Timestamp: 2026-01-01 18:35:56.209216 +0000 UTC
Value: 2555904
Metric #4
Descriptor:
-> Name: kafka.max.lag
-> Description: Max lag in messages between follower and leader replicas
-> Unit: {message}
-> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2025-12-30 18:12:56.077595 +0000 UTC
Timestamp: 2026-01-01 18:35:56.209216 +0000 UTC
Value: 0
Metric #5
Descriptor:
-> Name: kafka.partition.under_replicated
-> Description: The number of under replicated partitions
-> Unit: {partition}
-> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2025-12-30 18:12:56.077595 +0000 UTC
Timestamp: 2026-01-01 18:35:56.209216 +0000 UTC
Value: 0
Metric #6
Descriptor:
-> Name: kafka.request.time.50p
-> Description: The 50th percentile time the broker has taken to service requests
-> Unit: ms
-> DataType: Gauge
NumberDataPoints #0
Data point attributes:
-> type: Str(produce)
StartTimestamp: 2025-12-30 18:12:56.077595 +0000 UTC
Timestamp: 2026-01-01 18:35:56.209216 +0000 UTC
Value: 0.000000
NumberDataPoints #1
Data point attributes:
-> type: Str(fetchconsumer)
StartTimestamp: 2025-12-30 18:12:56.077595 +0000 UTC
Timestamp: 2026-01-01 18:35:56.209216 +0000 UTC
Value: 0.000000
NumberDataPoints #2
Data point attributes:
-> type: Str(fetchfollower)
StartTimestamp: 2025-12-30 18:12:56.077595 +0000 UTC
Timestamp: 2026-01-01 18:35:56.209216 +0000 UTC
Value: 500.000000
Metric #7
Descriptor:
-> Name: kafka.purgatory.size
-> Description: The number of requests waiting in purgatory
-> Unit: {request}
-> DataType: Gauge
NumberDataPoints #0
Data point attributes:
-> type: Str(fetch)
StartTimestamp: 2025-12-30 18:12:56.077595 +0000 UTC
Timestamp: 2026-01-01 18:35:56.209216 +0000 UTC
Value: 6135
