Monitoring MariaDB and MySQL

In a previous post, we explored monitoring PostgreSQL and general best practices around which metrics to collect and monitor for database performance, health, and reliability. In this post, we’ll look at how to monitor MariaDB (and MySQL) so we can keep the data stores that back our business applications performant, healthy, and resilient.
How to Get the Metrics
The first step to getting metrics from MariaDB or MySQL (for the rest of this post I'll just say MariaDB, but the same approach and receiver work for both) into Splunk Observability Cloud is to instrument the backend service(s) connected to your MariaDB instance(s). If you're working with the Splunk Distribution of the OpenTelemetry Collector, you can follow the guided install docs. Here I'm using Docker Compose to set up my application, a MariaDB service, and the Splunk Distribution of the OpenTelemetry Collector:
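Your Compose file will vary with your application, but a minimal sketch might look something like this (the `app` service, image tags, ports, and credentials below are illustrative placeholders, not part of any official setup):

```yaml
services:
  # Hypothetical application service; replace with your own build/image
  app:
    build: .
    environment:
      - DB_HOST=mariadb
    depends_on:
      - mariadb

  # MariaDB backing the application (placeholder credentials)
  mariadb:
    image: mariadb:10.11
    environment:
      - MARIADB_ROOT_PASSWORD=changeme
      - MARIADB_DATABASE=appdb
    ports:
      - "3306:3306"

  # Splunk Distribution of the OpenTelemetry Collector, reading the
  # config file we create in the next step
  otel-collector:
    image: quay.io/signalfx/splunk-otel-collector:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    environment:
      - SPLUNK_ACCESS_TOKEN=${SPLUNK_ACCESS_TOKEN}
      - SPLUNK_REALM=${SPLUNK_REALM}
    depends_on:
      - mariadb
```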
Next, we’ll need to configure the OpenTelemetry Collector with a receiver to collect telemetry data from our MariaDB instance and an exporter to send that data to our backend observability platform. If you already have an OpenTelemetry Collector configuration file, you can add the following configurations to that file. Since I set up the Collector using Docker Compose, I started by creating an empty otel-collector-config.yaml file for the container to mount. The mysql receiver is the MariaDB-compatible Collector receiver, so we’ll add it under the receivers block along with our Splunk Observability Cloud exporter under the exporters block. Here’s what our complete configuration looks like:
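A sketch, assuming the MariaDB service name and placeholder credentials from the Compose file above (your endpoint, username, and password should match your own MariaDB setup):

```yaml
receivers:
  # MariaDB-compatible receiver from the OpenTelemetry Collector
  mysql:
    endpoint: mariadb:3306
    username: root
    password: changeme
    collection_interval: 10s

exporters:
  # Sends metrics to Splunk Observability Cloud; token and realm are
  # read from the environment variables set in the Compose file
  signalfx:
    access_token: ${SPLUNK_ACCESS_TOKEN}
    realm: ${SPLUNK_REALM}

service:
  pipelines:
    metrics:
      receivers: [mysql]
      exporters: [signalfx]
```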
As always, don’t forget to wire the receivers and exporters into the service pipelines; in the sketch above, that’s the service block at the end.
Done! We can now build, start, or restart our service and see our database metrics flow into our backend observability platform.
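With Docker Compose, that can be as simple as the following (the service name assumes the sketch above):

```sh
# Rebuild images and (re)start all services in the background
docker compose up -d --build

# Tail the Collector logs to confirm the mysql receiver is scraping
docker compose logs -f otel-collector
```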
Visualizing the Data in Splunk Observability Cloud
With our installation and configuration of the OpenTelemetry Collector complete, we can now visualize our data in our backend observability platform, Splunk Observability Cloud.
From within Application Performance Monitoring (APM), we can view all of our application services and get a comprehensive performance overview – including performance details around our MariaDB:
We can explore our Service Map to visualize our MariaDB instance and how it fits in with the other services in our application:
And we can select our MariaDB instance to get deeper insight into performance metrics, requests, and errors:
If we scroll down in the Breakdown dropdown results on the right side of the screen, we can even get quick visibility into Database Query Performance, showing us how specific queries perform and seeing which queries are being executed:
Clicking into Database Query Performance takes us to a view that can be sorted by total query response time, queries in the 90th percentile of latency, or total requests so we can quickly isolate queries that might be impacting our services and our users:
We can select specific queries from our Top Queries for more query detail, like the full query, requests & errors, latency, and query tags:
We can dig into specific traces related to high-latency database requests, letting us see how specific users were affected by database performance:
And we can drill right down into span-level performance:
And the db.statement for the trace:
We can proactively use this information to further optimize our queries and improve user experience. You can also see that the database activity shows up in the overall trace waterfall, letting you get a full picture of how all components of the stack were involved in this transaction.
But how can this help us in an incident?
This is all helpful information and can guide us on our journeys to improve query performance and efficiency. But when our database connections fail, when our query errors spike, that’s when this data becomes critical to keeping our applications up and running.
When everything is running smoothly, our database over in APM might look something like this:
But when things start to fail, our Service Map highlights issues in red:
And we can dive into traces related to specific error spikes:
The stacktrace helps us get to the root cause of errors. In this case, we had an improperly specified connection string, and we can even see the exact line where an exception was thrown:
With quick, at-a-glance insight into service and database issues, we easily jumped into the code, resolved our database connection issue, and got our service back up and running so our users could carry on enjoying our application.
Wrap Up
Monitoring the datastores that back our applications is critical to performance, resiliency, and user experience. We can easily configure the OpenTelemetry Collector to receive MariaDB telemetry data and export it to a backend observability platform for visibility and proactive detection of anomalies that could impact end users. Want to try exporting your MariaDB data to a backend observability platform? Try Splunk Observability Cloud free for 14 days!