OpenTelemetry keeps getting easier to adopt — not just in the cloud, but everywhere.
Splunk and Dash0 just contributed the OpenTelemetry Injector to the community: a new way to automatically instrument host-based applications with zero code changes.
In this post, we break down how it works, how it differs from the Operator and Automatic Discovery, and why it’s a big step forward for hybrid observability across on-prem, VMs, and cloud-native workloads.
Once you’ve enabled Automatic Discovery in your Kubernetes environment, the real power comes from how you use it. In this post, we’ll explore practical examples of monitoring databases, caches, and entire application stacks using the Splunk Distribution of the OpenTelemetry Collector. See how to apply Automatic Discovery to complex, real-world scenarios with minimal configuration and maximum visibility in Splunk Observability Cloud.
Automatic Discovery removes a lot of the manual toil from an observability setup, but getting the configuration right ensures you reap all its benefits. In this post, we’ll walk through how to enable Automatic Discovery in Kubernetes using Helm, plus best practices for configuration, security, and scaling it across environments.
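As a preview, enabling it can be as simple as a few lines in the chart's values file. The following is a minimal sketch for the splunk-otel-collector Helm chart; the realm, token, and cluster name are placeholders, and the discovery key is an assumption that may vary by chart version, so check the chart's documented values.

```yaml
# values.yaml: minimal sketch; every value below is a placeholder
clusterName: my-cluster
splunkObservability:
  realm: us0
  accessToken: REDACTED
agent:
  discovery:
    enabled: true   # assumed key for Automatic Discovery; verify against your chart version
```

Applied with something like `helm install splunk-otel-collector -f values.yaml splunk-otel-collector-chart/splunk-otel-collector`.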
Setting up observability for dynamic environments like Kubernetes can be tedious and error-prone – but it doesn’t have to be. Automatic Discovery in the Splunk Distribution of the OpenTelemetry Collector simplifies observability by automatically detecting new services, generating the right monitoring configuration snippets, and sending metrics to Splunk Observability Cloud in real time.
Wondering what all the buzz around Observability is about? The best way to find out is to try out Splunk Observability yourself, with this fully functioning CNCF Observability demo that can be deployed in just a few minutes.
Is your critical legacy application a black box, leaving you in the dark about its performance and health? This article demonstrates how OpenTelemetry can illuminate even the most mysterious systems: leverage the Splunk OpenTelemetry Collector and Splunk Cloud to extract vital metrics, logs, and traces from your legacy applications, all without touching a single line of code. Learn the simple, three-step process to gain actionable insights, troubleshoot issues proactively, and make data-driven decisions for your essential, yet often overlooked, systems.
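To make the "zero code changes" idea concrete, here is a minimal sketch of a collector configuration that scrapes host metrics and tails the application's log files; the log path and gateway endpoint are hypothetical.

```yaml
receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:        # pull CPU, memory, disk, and filesystem metrics from the host
      cpu:
      memory:
      disk:
      filesystem:
  filelog:
    include: [/var/log/legacy-app/*.log]   # hypothetical log path
exporters:
  otlp:
    endpoint: gateway.example.com:4317     # placeholder endpoint
    tls:
      insecure: true                       # sketch only; use TLS in production
service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [otlp]
    logs:
      receivers: [filelog]
      exporters: [otlp]
```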
In this series, we're exploring manual instrumentation. We've successfully instrumented both the backend and frontend of our full-stack application. Now, let's take our observability to the next level by implementing the OpenTelemetry Collector as a central telemetry gateway. With just a few commits, we'll transform our direct-to-cloud architecture into a more scalable, secure, and maintainable solution using the OpenTelemetry Collector.
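At its core, the gateway pattern is just a collector that accepts OTLP from every application and forwards it to the backend. A minimal sketch, with a placeholder backend endpoint:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  batch:                                  # batch telemetry before export
exporters:
  otlphttp:
    endpoint: https://otlp.example.com    # placeholder backend endpoint
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

Applications then point their OTLP exporters at the gateway instead of the cloud endpoint, which centralizes credentials, batching, and egress in one place.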
Extending your observability practice to include your LLM services is just as easy as implementing the practice with any other application or service. In this post, we’ll see how Splunk Observability Cloud can help us troubleshoot a noisy neighbor problem with one of our LLMs.
In this series, we're exploring manual instrumentation. We first looked at what manual instrumentation is and why you might use it. We then went step-by-step through manually instrumenting the backend of our full-stack application with OpenTelemetry and Splunk Observability Cloud. In this post, we'll check out how we can implement manual instrumentation in the frontend of our full-stack application with Splunk Observability Cloud. With just a few commits, we'll take an un-instrumented frontend from zero to full observability with Splunk Real User Monitoring (RUM).
In this series, we're exploring manual instrumentation. We first looked at what manual instrumentation is and why you might use it.
In this post, we'll check out how we can implement manual instrumentation in the backend of our full-stack application. In just four commits, we'll take an un-instrumented code base from zero to full observability with OpenTelemetry and Splunk Observability Cloud.
In this series, we’ll explore manual instrumentation. We’ll first look at what manual instrumentation is and why you might use it, then in subsequent posts, we’ll check out how we can implement it. With just a few commits, we’ll take an un-instrumented code base from zero to full observability with OpenTelemetry and Splunk Observability Cloud.
Learn the how-to of self-service observability with Splunk Observability Cloud – from team setup and access controls to OpenTelemetry standards, automation, and common pitfalls.
Lower configuration maintenance costs and future-proof Kubernetes observability with the OpenTelemetry Collector’s new feature that declaratively configures the K8s observer and receiver creator for automatic workload discovery.
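For a taste of what that declarative configuration looks like, here is a minimal sketch: the k8s_observer extension watches the API server, and the receiver_creator instantiates a receiver whenever a workload matches a rule. The Redis rule and port are illustrative, and the debug exporter stands in for a real backend.

```yaml
extensions:
  k8s_observer:
    auth_type: serviceAccount
    observe_pods: true
receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    receivers:
      redis:
        rule: type == "port" && port == 6379   # illustrative matching rule
        config:
          collection_interval: 10s
exporters:
  debug:
service:
  extensions: [k8s_observer]
  pipelines:
    metrics:
      receivers: [receiver_creator]
      exporters: [debug]
```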
The whats, whys, and hows of converting logs to metrics using the OpenTelemetry Collector.
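One of the simplest "hows" is counting: the count connector turns every log record flowing through a logs pipeline into a metric. A minimal sketch, with an illustrative log path and the debug exporter standing in for a real backend:

```yaml
receivers:
  filelog:
    include: [/var/log/app/*.log]   # illustrative path
connectors:
  count:            # emits a metric counting the log records it receives
exporters:
  debug:
service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [count]            # the connector is the logs pipeline's exporter...
    metrics:
      receivers: [count]            # ...and the metrics pipeline's receiver
      exporters: [debug]
```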
An overview of agent saturation, how to measure it, and how to increase it using the Splunk Add-on for OpenTelemetry Collector.
Demystify observability protocols with our breakdown of some of the most popular.
Learn how to integrate Amazon Elastic Kubernetes Service (EKS) with Splunk Observability Cloud to unify your observability solution, more easily detect incidents, and resolve them faster (without having to navigate between different observability platforms).
We’ve looked at configuring the OpenTelemetry Collector to receive and export PostgreSQL telemetry data; now let’s look at how to do the same for MariaDB (and MySQL).
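The shape is nearly identical to the PostgreSQL setup: the mysql receiver, which also works against MariaDB, polls the server for metrics. A minimal sketch, with a placeholder endpoint and credentials:

```yaml
receivers:
  mysql:
    endpoint: localhost:3306          # placeholder
    username: otel                    # placeholder monitoring user
    password: ${env:MYSQL_PASSWORD}   # read the secret from the environment
    collection_interval: 10s
exporters:
  debug:
service:
  pipelines:
    metrics:
      receivers: [mysql]
      exporters: [debug]
```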
Interested in improving the power of your metrics for better application reliability and performance? Want to reduce backend observability storage costs while doing it? Check out these best practices for managing metric data pipelines and minimizing metric data noise using OpenTelemetry processors.
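As a small example of the idea, the filter processor can drop metric series you never query before they reach (and get billed by) the backend. A minimal sketch; the metric name pattern is illustrative:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
processors:
  filter/drop_noise:
    error_mode: ignore
    metrics:
      metric:
        - 'IsMatch(name, "system.network.*")'   # illustrative: drop all network metrics
exporters:
  debug:
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [filter/drop_noise]
      exporters: [debug]
```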
Configure your OpenTelemetry Collector to receive and export PostgreSQL telemetry data for improved performance, resiliency, and user experience.
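For reference, the core of that configuration is the postgresql receiver. Everything below is a minimal sketch with a placeholder endpoint, credentials, and database name:

```yaml
receivers:
  postgresql:
    endpoint: localhost:5432            # placeholder
    transport: tcp
    username: otel                      # placeholder monitoring user
    password: ${env:POSTGRES_PASSWORD}  # read the secret from the environment
    databases: [app_db]                 # placeholder database list
    tls:
      insecure: true                    # sketch only; enable TLS in production
exporters:
  debug:
service:
  pipelines:
    metrics:
      receivers: [postgresql]
      exporters: [debug]
```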
My role as an Observability Specialist at Splunk provides me with the opportunity to work with customers of all sizes as they implement OpenTelemetry in their organizations.
If you read my earlier article, 3 Things I Love About OpenTelemetry, you'll know that I'm a huge fan of OpenTelemetry. But like any technology, there's always room for improvement.
In this article, I'll share three areas where I think OpenTelemetry could be improved to make it even better.
In 2022, I made the decision to focus my career on OpenTelemetry. I was excited by the technology and, after working with proprietary APM agent technology for nearly a decade, I believed that it was the future of instrumentation.
This ultimately led me to join Splunk in 2023 as an Observability Specialist. Splunk Observability Cloud is OpenTelemetry-native, so this role allowed me to work extensively with OpenTelemetry as customers of all sizes implemented it in their organizations.
So how am I feeling about OpenTelemetry in 2024? Well, I’m even more excited about it than before! In this article, I’ll share the top three things that I love about OpenTelemetry.
Spinnaker is an open-source, multi-cloud continuous delivery platform composed of a series of microservices, each performing a specific function. Understanding the performance and health of these individual components is critical for maintaining a robust Spinnaker environment. Read on for the details!
This article is a code-based discussion of passing OpenTelemetry trace context across STOMP protocol pub/sub with a brokered WebSocket. This example uses Spring Boot for most components and leverages OpenTelemetry's APIs for manual instrumentation.
What happens if the OpenTelemetry Collector cannot send data? Will it be dropped, queued in memory, or buffered on disk? Let's find out which settings are available and how they work!
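In short: exporters built on the collector's exporterhelper retry with backoff and buffer in an in-memory queue by default, and pointing that queue at the file_storage extension persists it across restarts. A minimal sketch of the relevant knobs, with a placeholder endpoint and an illustrative directory:

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/queue     # illustrative path for the on-disk queue
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlp:
    endpoint: backend.example.com:4317    # placeholder
    retry_on_failure:
      enabled: true
      initial_interval: 5s                # first retry after 5s, backing off...
      max_interval: 30s                   # ...to at most 30s between attempts
      max_elapsed_time: 300s              # give up (and drop) after 5 minutes
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 1000                    # batches held while the backend is unreachable
      storage: file_storage               # persist the queue via the extension
service:
  extensions: [file_storage]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```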