Hello,
I'm setting up StatsD to send custom metrics from an AWS EC2 instance, where the Splunk OpenTelemetry Collector is running, to Splunk Observability Cloud.
I've configured StatsD as a receiver following the guidelines at https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver. Here's the StatsD receiver configuration in my agent_config.yaml file:
receivers:
  statsd:
    endpoint: "localhost:8125"
    aggregation_interval: 60s
    enable_metric_type: false
    is_monotonic_counter: false
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "distribution"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "timing"
        observer_type: "summary"
The GitHub documentation provides exporter configurations, but I'm unsure how to implement them effectively. The GitHub document shows the following:
exporters:
  file:
    path: ./test.json
service:
  pipelines:
    metrics:
      receivers: [statsd]
      exporters: [file]
Below is the receivers configuration I am setting in the service section of agent_config.yaml:
service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
When I add "statsd" to the receivers list as shown above (receivers: [hostmetrics, otlp, signalfx, statsd], exporters: [signalfx]) and restart the service with "systemctl restart splunk-otel-collector.service", the Splunk OTel Collector agent stops sending any metrics to Splunk Observability Cloud. When I remove statsd (receivers: [hostmetrics, otlp, signalfx]), the agent starts sending metrics again.
What is the correct/supported receiver/exporter configuration in the service section for statsd?
Thanks
update: We did get this resolved earlier today. The cause was a port conflict as 8125 was already in use. With statsd, this can be tricky to catch because it's UDP--so normal testing methods for TCP ports don't work. We found that 8127 was available and used that to get it working. If anyone else encounters this, be sure to check logs (e.g., /var/log/messages or /var/log/syslog) for port conflict error messages.
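For reference, here is a rough sketch of how the receiver block looks after the change (the timer_histogram_mapping settings from the original config are unchanged and omitted here for brevity):
receivers:
  statsd:
    # 8125 was already in use by another process on this host,
    # so the receiver is bound to a free UDP port instead
    endpoint: "localhost:8127"
    aggregation_interval: 60s
Keep in mind that anything emitting StatsD metrics on the instance has to be pointed at the new port as well.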
Hello @bishida,
Thank you for taking the time to look into it and for all your help and support. It's truly appreciated.
Have you checked the logs of the Otel Collector?
Could you please define a separate pipeline for the statsd metrics, like this:
service:
  pipelines:
    metrics/statsd:
      receivers:
        - statsd
      exporters:
        - signalfx
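If you also want the statsd metrics to go through the same processing as your other metrics, you could reuse the processors that are already defined in your config (a sketch, assuming the memory_limiter, batch, and resourcedetection processors from your existing metrics pipeline):
service:
  pipelines:
    metrics/statsd:
      receivers:
        - statsd
      # reuse the processors already defined for the main metrics pipeline
      processors:
        - memory_limiter
        - batch
        - resourcedetection
      exporters:
        - signalfx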