
Monitoring MariaDB and MySQL

CaitlinHalla
Splunk Employee

In a previous post, we explored monitoring PostgreSQL and general best practices around which metrics to collect and monitor for database performance, health, and reliability. In this post, we’ll look at how to monitor MariaDB (and MySQL) so we can keep the data stores that back our business applications performant, healthy, and resilient. 

How to get the metrics

The first step in getting metrics from MariaDB or MySQL into Splunk Observability Cloud (for the rest of this post, I’ll just say MariaDB, but the same approach and receiver work for both) is to instrument the backend service(s) connected to your MariaDB instance(s). If you’re working with the Splunk Distribution of the OpenTelemetry Collector, you can follow the guided install docs. Here I’m using Docker Compose to set up my application, MariaDB service, and the Splunk Distribution of the OpenTelemetry Collector:

docker compose yaml.png
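The screenshot above shows the Docker Compose file used for this walkthrough. If you’re putting together something similar, a minimal sketch might look like the following (the service names, image tags, and credentials are placeholders for illustration, not the exact values from the screenshot):

```yaml
# docker-compose.yaml -- a minimal sketch; names and credentials are placeholders
services:
  app:
    build: .                      # your instrumented application service
    environment:
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
      - DB_HOST=mariadb
    depends_on:
      - mariadb
      - otel-collector

  mariadb:
    image: mariadb:11
    environment:
      - MARIADB_DATABASE=appdb
      - MARIADB_USER=appuser
      - MARIADB_PASSWORD=changeme
      - MARIADB_ROOT_PASSWORD=changeme-root

  otel-collector:
    image: quay.io/signalfx/splunk-otel-collector:latest
    environment:
      - SPLUNK_ACCESS_TOKEN=${SPLUNK_ACCESS_TOKEN}
      - SPLUNK_REALM=${SPLUNK_REALM}
      - SPLUNK_CONFIG=/etc/otel/collector/otel-collector-config.yaml
    volumes:
      - ./otel-collector-config.yaml:/etc/otel/collector/otel-collector-config.yaml
    ports:
      - "4317:4317"               # OTLP gRPC from the instrumented app
      - "4318:4318"               # OTLP HTTP
```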

Next, we’ll need to configure the OpenTelemetry Collector with a receiver to collect telemetry data from our MariaDB instance and an exporter to send that data to our backend observability platform. If you already have an OpenTelemetry Collector configuration file, you can add the following configuration to that file. Since I set up the Collector using Docker Compose, I created a new otel-collector-config.yaml file to hold it. The mysql receiver is the MariaDB-compatible Collector receiver, so we’ll add it under the receivers block, along with our Splunk Observability Cloud exporter under the exporters block. Here’s what our complete configuration looks like:

otel config.png
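The configuration in the screenshot is the one used in this walkthrough; a sketch along the same lines might look like this (the endpoint, credentials, and batch processor are illustrative assumptions, and you’d substitute your own database user, realm, and access token):

```yaml
# otel-collector-config.yaml -- a sketch; endpoint and credentials are placeholders
receivers:
  otlp:                               # telemetry from the instrumented app
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  mysql:                              # MariaDB-compatible receiver
    endpoint: mariadb:3306
    username: appuser
    password: changeme
    collection_interval: 10s

processors:
  batch:

exporters:
  signalfx:                           # sends metrics to Splunk Observability Cloud
    access_token: ${SPLUNK_ACCESS_TOKEN}
    realm: ${SPLUNK_REALM}

service:
  pipelines:
    # a traces pipeline for the app's OTLP data would sit alongside this;
    # it's omitted here to keep the sketch focused on database metrics
    metrics:
      receivers: [otlp, mysql]
      processors: [batch]
      exporters: [signalfx]
```

The mysql receiver polls the database directly over the network, so it needs a database user it can authenticate as; a dedicated, least-privilege monitoring user is a reasonable choice here.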

As always, don’t forget to add the receivers and exporters to the service pipelines. 

Done! We can now build, start, or restart our service and see our database metrics flow into our backend observability platform. 

Visualizing the Data in Splunk Observability Cloud

With our installation and configuration of the OpenTelemetry Collector complete, we can now visualize our data in our backend observability platform, Splunk Observability Cloud. 

From within Application Performance Monitoring (APM), we can view all of our application services and get a comprehensive performance overview – including performance details around our MariaDB: 

apm overview landing.png

We can explore our Service Map to visualize our MariaDB instance and how it fits in with the other services in our application:

service map.png

And we can select our MariaDB instance to get deeper insight into performance metrics, requests, and errors: 

mariadb details from service map.png

If we scroll down in the Breakdown dropdown results on the right side of the screen, we can even get quick visibility into Database Query Performance, which shows us how specific queries perform and which queries are being executed: 

Database Query Latency scroll down.png

Clicking into Database Query Performance takes us to a view that can be sorted by total query response time, queries in the 90th percentile of latency, or total requests so we can quickly isolate queries that might be impacting our services and our users:

click into Database Query Performance.png

We can select specific queries from our Top Queries for more query detail, like the full query, requests & errors, latency, and query tags: 

query details full query.png

query details latency and tags.png

We can dig into specific traces related to high-latency database requests, letting us see how specific users were affected by database performance: 

trace related to high latency.png

We can also drill right down into span performance:

span performance for trace.png

And view the db.statement for the trace:

db.statement from trace.png

We can proactively use this information to further optimize our queries and improve user experience. You can also see that the database activity shows up in the overall trace waterfall, letting you get a full picture of how all components of the stack were involved in this transaction.

But how can this help us in an incident?

This is all helpful information and can guide us on our journeys to improve query performance and efficiency. But when our database connections fail, when our query errors spike, that’s when this data becomes critical to keeping our applications up and running. 

When everything is running smoothly, our database over in APM might look something like this: 

db overview from APM.png

db overview from APM2.png

But when things start to fail, our Service Map highlights issues in red: 

service map overview with mysql error.png

And we can dive into traces related to specific error spikes: 

failing trace.png

The stacktrace helps us get to the root cause of errors. In this case, we had an improperly specified connection string, and we can even see the exact line where an exception was thrown: 

failing trace stacktrace.png

With quick, at-a-glance insight into service and database issues, we easily jumped into the code, resolved our database connection issue, and got our service back up and running so our users could carry on enjoying our application. 

Wrap Up

Monitoring the data stores that back our applications is critical for improved performance, resiliency, and user experience. We can easily configure the OpenTelemetry Collector to receive MariaDB telemetry data and export it to a backend observability platform for visibility and proactive detection of anomalies that could impact end users. Want to try exporting your MariaDB data to a backend observability platform? Try Splunk Observability Cloud free for 14 days.

