We will use Strimzi to deploy a Kafka cluster. Before deploying the Strimzi cluster operator, create a namespace called kafka:
kubectl create namespace kafka
Apply the Strimzi install files, including ClusterRoles, ClusterRoleBindings and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for the custom resources (CRs, such as Kafka, KafkaTopic and so on) you will be using to manage Kafka clusters, topics and users.
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
Follow the deployment of the Strimzi cluster operator:
kubectl get pod -n kafka --watch
You can also follow the operator’s log:
kubectl logs deployment/strimzi-cluster-operator -n kafka -f
Create a new Kafka custom resource to get a small persistent Apache Kafka cluster with one node each for Apache ZooKeeper and Apache Kafka:
# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
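For reference, the example manifest defines roughly the following (a trimmed sketch of what kafka-persistent-single.yaml typically contains; the exact fields can differ between Strimzi versions, so check the file itself):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1                  # single-broker cluster
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim     # backed by a PersistentVolumeClaim
      size: 100Gi
      deleteClaim: false
  zookeeper:
    replicas: 1                  # single ZooKeeper node
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}            # enables KafkaTopic CRs
    userOperator: {}             # enables KafkaUser CRs
```

The cluster name (my-cluster) is what the later kubectl wait command and the bootstrap service name (my-cluster-kafka-bootstrap) are derived from.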
Wait while Kubernetes starts the required pods, services, and so on:
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
With the cluster running, run a simple producer to send messages to a Kafka topic (the topic is automatically created):
kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
Once everything is set up correctly, you’ll see a prompt where you can type in your messages:
If you don't see a command prompt, try pressing enter.
>Hello
>I am Abhi
To receive them in a different terminal, run:
kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
If everything works as expected, you’ll be able to see the message you produced in the previous step:
If you don't see a command prompt, try pressing enter.
>Hello
>I am Abhi
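The console producer auto-created my-topic with broker defaults. To manage topics declaratively through the Topic Operator instead, you can create a KafkaTopic custom resource (a minimal sketch; the partition and replica counts are illustrative and sized for this single-broker cluster):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster   # must match the Kafka CR name
spec:
  partitions: 1
  replicas: 1
```

Apply it with kubectl apply -f and list managed topics with kubectl get kafkatopics -n kafka.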
Our cluster is ready. Now let’s set up the Prometheus Kafka exporter to emit metrics for the Kafka cluster.
We will use the prometheus-community Helm charts. Add the repo:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
Create a file called kafka_helm_values.yaml with the service discovery config:
kafkaServer:
  - my-cluster-kafka-bootstrap:9092
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/port: "9308"
service:
  type: ClusterIP
  port: 9308
  labels:
    clustername: my-cluster
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "9308"
To deploy, run:
helm install --namespace kafka prom-kafka-expo-ns prometheus-community/prometheus-kafka-exporter -f kafka_helm_values.yaml
If you changed the cluster name, edit the YAML file above accordingly. To view the metrics at http://localhost:9308/metrics, port-forward the exporter pod:
export POD_NAME=$(kubectl get pods --namespace kafka -l "app=prometheus-kafka-exporter,release=prom-kafka-expo-ns" -o jsonpath="{.items[0].metadata.name}")
echo $POD_NAME
kubectl port-forward $POD_NAME 9308:9308 -n kafka
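With the port-forward running, you can sanity-check the exporter with curl. The snippet below greps a saved copy of the metrics output for the per-topic series; the sample values are illustrative, but kafka_brokers, kafka_topic_partitions and kafka_topic_partition_current_offset are standard kafka_exporter metric names:

```shell
# Capture the exporter output while the port-forward is running:
#   curl -s http://localhost:9308/metrics > /tmp/kafka_metrics.txt
# A sample of what it contains (values illustrative):
cat <<'EOF' > /tmp/kafka_metrics.txt
kafka_brokers 1
kafka_topic_partitions{topic="my-topic"} 1
kafka_topic_partition_current_offset{partition="0",topic="my-topic"} 2
EOF

# Show only the series for our topic:
grep 'topic="my-topic"' /tmp/kafka_metrics.txt
```

If the per-topic series are missing, check the exporter pod logs for connection errors to my-cluster-kafka-bootstrap:9092.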
Make sure you have deployed Cisco Cloud Observability as mentioned here.
If so, edit your kafka_helm_values.yaml file and add the AppDynamics annotations shown below:
kafkaServer:
  - my-cluster-kafka-bootstrap:9092
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/port: "9308"
  appdynamics.com/exporter_type: "kafka"
  appdynamics.com/kafka_cluster_name: "my-cluster"
service:
  type: ClusterIP
  port: 9308
  labels:
    clustername: my-cluster
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "9308"
    appdynamics.com/exporter_type: "kafka"
    appdynamics.com/kafka_cluster_name: "my-cluster"
Afterward, run:
helm upgrade --namespace kafka prom-kafka-expo-ns prometheus-community/prometheus-kafka-exporter -f kafka_helm_values.yaml
Wait about five minutes, then check the logs of your OpenTelemetry Collector:
kubectl -n cco logs appdynamics-collectors-ss-appdynamics-otel-collector-0 | grep kafka
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.pod.labels: Map({"run":"kafka-consumer"})
-> k8s.pod.name: Str(kafka-consumer)
-> k8s.pod.containernames: Slice(["kafka-consumer"])
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.pod.containernames: Slice(["kafka-producer"])
-> k8s.pod.labels: Map({"run":"kafka-producer"})
-> k8s.pod.name: Str(kafka-producer)
logger.kafka.name = org.apache.kafka
logger.kafka.level = WARN
-> k8s.namespace.name: Str(kafka)
-> k8s.service.selector: Map({"strimzi.io/cluster":"my-cluster","strimzi.io/kind":"Kafka","strimzi.io/name":"my-cluster-kafka"})
-> k8s.service.labels: Map({"app.kubernetes.io/instance":"my-cluster","app.kubernetes.io/managed-by":"strimzi-cluster-operator","app.kubernetes.io/name":"kafka","app.kubernetes.io/part-of":"strimzi-my-cluster","strimzi.io/cluster":"my-cluster","strimzi.io/component-type":"kafka","strimzi.io/kind":"Kafka","strimzi.io/name":"my-cluster-kafka"})
-> k8s.service.name: Str(my-cluster-kafka-brokers)
-> k8s.namespace.name: Str(kafka)
-> k8s.secret.labels: Map({"modifiedAt":"1714427393","name":"prom-kafka-expo-ns","owner":"helm","status":"superseded","version":"1"})
-> k8s.namespace.name: Str(kafka)
-> k8s.secret.name: Str(sh.helm.release.v1.prom-kafka-expo-ns.v1)
-> k8s.namespace.name: Str(kafka)
In the UI, go to Observe -> Kafka Clusters.
You should see the topic listed.
Great work everyone!!