How to observe Kubernetes deployment of OpenTelemetry demo app in Splunk AppDynamics
- Introduction
- Backstory: Splunk AppDynamics and OpenTelemetry demo
- Setting up Kubernetes Cluster
- Creating Splunk AppDynamics Credentials
- Deploying OtelDemo on K8s and observing it in Splunk AppDynamics
- Conclusion
Introduction
As a recent CNCF blog post notes, Kubernetes and OpenTelemetry rank first and second among open source projects in project velocity. Since its launch over 10 years ago, Kubernetes® has become the software industry's standard platform for managing containerized applications across a cluster of servers. For newcomers to the observability domain, OpenTelemetry™ provides a standard way to collect telemetry data (metrics, logs, and traces) from software applications and infrastructure and send it to one or more backends for performance analysis. The backends can be open source (Jaeger or Zipkin, for example), commercial (such as Splunk AppDynamics or Splunk Observability), or both.
To enable faster adoption and showcase instrumentation best practices, the OTel community has built a demo application, the OpenTelemetry Community Demo. In this blog, I'll show how to configure the Kubernetes deployment of the OpenTelemetry demo to send trace data to Splunk AppDynamics for further analysis. If you are interested in observing the Docker Compose deployment of the OpenTelemetry demo application in Splunk AppDynamics, please refer to this other article.
Backstory: Splunk AppDynamics and OpenTelemetry demo
Splunk AppDynamics provides full-stack observability of hybrid and on-prem applications and their impact on business performance. In addition to its proprietary ingestion format, AppDynamics also supports OpenTelemetry trace ingestion from various language agents (Java, .NET, Python, Go, etc.), giving customers more options for how they ingest telemetry data.
The OpenTelemetry Community Demo is a simulated version of an eCommerce store selling astronomy equipment. The app consists of 14+ microservices communicating with each other over HTTP or gRPC. The microservices are built using a variety of programming languages (Java, JavaScript, C#, etc.) and instrumented using OpenTelemetry (auto, manual, or both). The diagram below shows the data flow and programming languages used.
(Image credit: OpenTelemetry Demo contributors.)
In addition to the microservices shown here, the demo app also comes with supporting components such as OpenTelemetry Collector, Grafana, Prometheus and Jaeger to export and visualize traces, metrics and so on. The OpenTelemetry Collector is highly configurable. Once exporters for various backends are defined and enabled in the service pipeline, the Collector can be set up to send telemetry data to multiple backends simultaneously. The diagram below shows the OTel demo with supporting components, as well as a dotted line to Splunk AppDynamics, which we will configure in the next section.
Setting up Kubernetes Cluster
- Make sure you have access to a Kubernetes (K8s) cluster via kubectl. The cluster can be running locally (e.g. via kind, minikube, or Docker Desktop with Kubernetes enabled) or remotely on a cloud provider (e.g. EKS, GKE, or AKS). If you don't have a K8s cluster readily available, you can create one on your local Mac using kind:
brew install kind
kind create cluster --name otel-demo
- Confirm access to your K8s cluster by running the following command:
kubectl get pods -A
- If the connection is successful, the command lists the pods running across all namespaces.
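On a fresh kind cluster, the output looks roughly like the following (pod name suffixes, counts, and ages are illustrative and will differ on your machine):

```
NAMESPACE            NAME                                     READY   STATUS    RESTARTS   AGE
kube-system          coredns-7db6d8ff4d-abcde                 1/1     Running   0          2m
kube-system          etcd-otel-demo-control-plane             1/1     Running   0          2m
kube-system          kube-apiserver-otel-demo-control-plane   1/1     Running   0          2m
local-path-storage   local-path-provisioner-988d74bc-xyz12    1/1     Running   0          2m
```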
- Using Helm, deploy the OpenTelemetry demo app in your K8s cluster (Ref. OpenTelemetry Demo Kubernetes Deployment documentation):
brew install helm
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install my-otel-demo open-telemetry/opentelemetry-demo
kubectl port-forward svc/my-otel-demo-frontend-proxy 8080:8080
- Confirm the OTel demo app is working by going to http://localhost:8080/ and completing an item checkout workflow.
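As an optional sanity check from the terminal (assuming curl is installed), you can confirm the frontend proxy is answering on the forwarded port; a healthy deployment returns HTTP 200:

```shell
# Print only the HTTP status code from the demo frontend;
# expect 200 once the pods are up and the port-forward is active.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
```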
Creating Splunk AppDynamics Credentials
- Contact your Splunk AppDynamics account representative to set up an AppDynamics account for your company. The account will have a URL format similar to https://<your-company/account-name>.saas.appdynamics.com and will be the central location where you'll see all telemetry data from your applications.
- Generate an API key by going to your AppDynamics URL > Otel > Get Started > Access Key.
- Go to the Processors, Exporters and Service Configuration section and note down the values of the following keys. We will use them in the next section:
- appdynamics.controller.account
- appdynamics.controller.host
Deploying OtelDemo on K8s and observing it in Splunk AppDynamics
- Next, we create a custom Helm values file, otel-col-appd.yaml (Ref. bring-your-own-backend), to send trace data to Splunk AppDynamics. Create the file otel-col-appd.yaml on your local Mac with the contents below, or copy the GitHub gist. Make sure there are no YAML validation errors by opening the file in an IDE with YAML support (VS Code, etc.). At runtime, the contents of this file are merged with the default Helm configuration to produce the consolidated OpenTelemetry Collector configuration.
opentelemetry-collector:
  config:
    processors:
      resource:
        attributes:
          - key: appdynamics.controller.account
            action: upsert
            value: "from AppD account url > Otel > Configuration > Processor section"
          - key: appdynamics.controller.host
            action: upsert
            value: "from AppD account url > Otel > Configuration > Processor section"
          - key: appdynamics.controller.port
            action: upsert
            value: 443
          - key: service.namespace
            action: upsert
            value: appd-otel-demo-k8s-kind-mac # custom name for your app
      batch:
        timeout: 30s
        send_batch_size: 90
    exporters:
      otlphttp/appdynamics:
        endpoint: "from AppD account url > Otel > Configuration > Exporter section"
        headers:
          x-api-key: "from AppD account url > Otel > Configuration > API Key"
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [resource, batch]
          exporters: [otlp, spanmetrics, otlphttp/appdynamics]
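Before installing, you can render the chart locally to confirm the custom exporter made it into the merged Collector configuration. This is a sketch that assumes the open-telemetry Helm repo from the earlier step is already added:

```shell
# Render the chart with our values file (no cluster changes are made)
# and check that the AppDynamics exporter appears in the output.
helm template appd-otel-demo open-telemetry/opentelemetry-demo \
  --values otel-col-appd.yaml | grep -n "otlphttp/appdynamics"
```

If grep prints no matching lines, the values file was not picked up or has an indentation error.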
- Install a new Helm release in your K8s cluster using the following steps:
helm uninstall my-otel-demo
helm install appd-otel-demo open-telemetry/opentelemetry-demo --values otel-col-appd.yaml
- Wait a few minutes until all the pods are running, then port-forward the frontend proxy. Note that the service name now reflects the new release name, appd-otel-demo:
kubectl port-forward svc/appd-otel-demo-frontend-proxy 8080:8080
- Confirm you can access the OpenTelemetry demo app UI at http://localhost:8080/
- Next, log in to your Splunk AppDynamics URL. You'll see a service flow map that shows the various microservices and the interactions between them.
- Click the Tree view to display key APM metrics such as Avg Response Time, Calls/min, and Errors/min.
- An observability platform should be able to detect an increase in error rates of the microservices it's monitoring. Fortunately, the OpenTelemetry demo has an error-injection capability via feature flags to test this functionality. Go to the feature flag UI at http://localhost:8080/feature/ and enable the productCatalogFailure feature flag. This causes the product catalog service to return an error to the frontend service for a specific product ID while responding correctly to all other product IDs. Note the increase in error rate on the home page. To view error details, click Troubleshoot > Errors > Error Transactions > Details. AppDynamics accurately captures the error reason as "Product Catalog Feature Flag Enabled". AppDynamics provides health rules and alerting functionality to respond quickly to such situations.
Conclusion
The OpenTelemetry Community Demo application is a valuable and safe tool for learning about OpenTelemetry and instrumentation best practices. In this blog, we showed how to configure the K8s deployment of the demo app to send telemetry data to Splunk AppDynamics. We also explored key Splunk AppDynamics features such as the flow map and APM metrics, and observed an increase in error rates via a fault-injection scenario.