
Ingesting HAProxy stats into Splunk

sunka55
New Member

All,

I don't see any mention of HAProxy stats in the add-on documentation. It looks like HAProxy can expose useful performance stats once its stats page is enabled.

ref: https://www.datadoghq.com/blog/how-to-collect-haproxy-metrics/#stats-page

Can anybody shed some light on the right way to do this?

Thanks,
Sam


mghocke
Path Finder

The way I recently did this was to run a cron job every 10 minutes that pulls the metrics from HAProxy over HTTP in CSV format:

curl -s -u <username:password> 'http://localhost:<stats port>/stats;csv'
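(Note the quotes: without them the shell treats the semicolon as a command separator.) For the scheduling part, a minimal crontab sketch could look like this, assuming the full script below is saved as /opt/scripts/haproxy-stats.sh (a hypothetical path) and marked executable:

# pull HAProxy stats every 10 minutes (hypothetical script path)
*/10 * * * * /opt/scripts/haproxy-stats.sh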

It needs a bit of post-processing so you can use the regular CSV ingest on a universal forwarder. Here is the script I am using:

#!/bin/sh
TS=`date +"%Y%m%d%H%M%S"`
DIR=<monitored directory>
FILE="$DIR/haproxy-metrics.$TS.csv"

# Create the drop directory if it does not exist yet
[ -d "$DIR" ] || mkdir -p "$DIR"

# Pull the stats CSV, prepend a timestamp column to every row,
# and strip the '#' comment marker from the header line
/usr/bin/curl -s -u <credentials> http://localhost:<port>/stats\;csv \
  | sed -e 's/^# /# time,/' -e 's/^\([^#]\)/'$TS',\1/' -e 's/# //' > "$FILE"

The sed command inserts a time column at the front of every row, filled with the current timestamp, and removes the '#' from the header line.
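For illustration, with a hypothetical timestamp of 20240101120000, the first lines change roughly like this (the real HAProxy CSV header begins with pxname,svname,...):

Before:
# pxname,svname,qcur,qmax,...
stats,FRONTEND,...

After:
time,pxname,svname,qcur,qmax,...
20240101120000,stats,FRONTEND,...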

On the forwarder you can configure inputs.conf to monitor that directory and assign it the sourcetype csv, or roll your own sourcetype according to this doc page. Splunk will take care of the rest. Make sure you clean out that metrics directory once in a while so the files don't accumulate forever.
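As a sketch, the inputs.conf stanza could look like this, assuming the script writes into /var/spool/haproxy-metrics (a hypothetical path standing in for <monitored directory> above):

[monitor:///var/spool/haproxy-metrics]
sourcetype = csv
disabled = false

For the cleanup, a second cron entry along these lines (GNU find) would drop files older than an hour:

*/30 * * * * find /var/spool/haproxy-metrics -name 'haproxy-metrics.*.csv' -mmin +60 -delete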
