Product News & Announcements
All the latest news and announcements about Splunk products. Subscribe and never miss an update!

SOC4Kafka - New Kafka Connector Powered by OpenTelemetry

pszkamruk
Splunk Employee

The new SOC4Kafka connector, built on OpenTelemetry, collects Kafka messages and forwards them to Splunk. It serves as a replacement for the existing Kafka connector (SC4Kafka). SOC4Kafka captures events published to Kafka topics and efficiently forwards them to Splunk, empowering organizations to utilize Splunk's powerful analytics and visualization capabilities. This integration enables real-time monitoring, analysis, and valuable insights from collected event data.

Let’s start with the simple question: Why?

Why do we need a new connector if we already have one? 

The answer may not seem obvious at first, but it becomes clear once you look at the factors behind the decision.

There are a few factors:

  • Deployment model and security. The old SC4Kafka has some limitations. The biggest is that it must be installed on the customer's Kafka cluster. In some cases, this is a no-go because it raises security concerns. The new connector is a standalone application that can be installed and scaled according to customer needs and requirements.
  • Configuration and installation. The configuration of the old connector is also something we wanted to change. SC4Kafka uses a curl request with multiline JSON, while SOC4Kafka simplifies the installation process by using a YAML file instead. This gives us more flexibility and makes introducing changes easier.
  • OpenTelemetry alignment and simplification. Enhancing OpenTelemetry support for all new connectors aligns with the broader strategy of unifying and simplifying data acquisition. Replacing the legacy connector with SOC4Kafka aims to reduce onboarding friction and streamline data acquisition maintenance.

The goal is to simplify data acquisition from Kafka and provide an OpenTelemetry-compatible replacement for the existing SC4Kafka connector.

So what is the difference? How is it built now?

The SOC4Kafka connector is built on the OpenTelemetry Collector and consists of several classes of pipeline components. The most important components for the Kafka OTel connector are Receivers, Processors, and Exporters.

SOC4Kafka building blocks

Receivers

The Kafka receiver fetches data from a Kafka cluster. Detailed configuration of this receiver can be found under this link.
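As a sketch, a receiver entry might look like the following (broker addresses, topic name, consumer group, and SASL credentials are illustrative placeholders; check the linked receiver documentation for the authoritative option names and defaults):

```yaml
receivers:
  kafka:
    brokers: ["broker-1:9092", "broker-2:9092"]  # illustrative broker addresses
    topic: app-logs                              # illustrative topic name
    encoding: text                               # how message payloads are decoded
    group_id: otel-collector                     # consumer group used by the receiver
    auth:
      sasl:                                      # optional: only if your cluster requires SASL
        username: <username>
        password: <password>
        mechanism: PLAIN
```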

Processors

Processors are optional components in the data pipeline that transform data before it is exported. Each processor performs actions specific to its settings, such as batching, filtering, or dropping data. More information about processors can be found under this link.
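For instance, the batch processor used later in this post can be tuned with settings like these (the values shown are illustrative, not recommendations; see the linked processor documentation for defaults):

```yaml
processors:
  batch:
    timeout: 10s           # flush a batch after this interval even if it is not full
    send_batch_size: 1024  # number of records that triggers a flush
```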

Exporters

The Splunk HEC exporter sends data to a Splunk index via the HTTP Event Collector (HEC). Detailed configuration of this exporter can be found under this link.
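Beyond the required fields, the exporter exposes reliability-related options. The sketch below assumes a self-signed certificate on the HEC endpoint and enables retries; option names should be verified against the linked exporter documentation:

```yaml
exporters:
  splunk_hec:
    token: <Splunk HEC Token>
    endpoint: https://splunk.example.com:8088/services/collector  # illustrative endpoint
    index: main
    tls:
      insecure_skip_verify: true  # assumption: self-signed HEC certificate
    retry_on_failure:
      enabled: true               # retry transient HEC failures
```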


This looks complex. Is it really that simple to configure?

It might seem complicated, but honestly it is not. All you need to do is:

  1. Download the splunk-otel-collector binary.
  2. Prepare a values.yaml file, the configuration file that defines your environment's building blocks.
  3. Run the connector.

Here is a more detailed installation guide: How to start with SOC4Kafka?

You can also use our quick start guide to get hands-on with the connector and start searching your Kafka events in Splunk faster.

Basic Configuration Example:

receivers:
  kafka:
    brokers: [<Brokers>]
    topic: <Topic>
    encoding: <Encoding>

processors:
  resourcedetection:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
  batch:

exporters:
  splunk_hec:
    token: <Splunk HEC Token>
    endpoint: <Splunk HEC Endpoint>
    source: <Source>
    sourcetype: <Sourcetype>
    index: <Splunk index>

service:
  pipelines:
    logs:
      receivers: [kafka]
      processors: [resourcedetection, batch]
      exporters: [splunk_hec]


This can, of course, be extended with more complex features such as:

  • Collecting data from multiple topics
  • Extracting metadata from headers
  • Extracting timestamps from events
  • Regex topic matching

Everything is described in detail in the Advanced Configuration section.
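As one hedged example of the multi-topic case, standard OpenTelemetry Collector configuration lets you define several named instances of the Kafka receiver and attach them all to the same pipeline (the topic names below are illustrative; refer to the Advanced Configuration section for the connector's own multi-topic and regex options):

```yaml
receivers:
  kafka/orders:
    brokers: [<Brokers>]
    topic: orders      # illustrative topic
    encoding: <Encoding>
  kafka/payments:
    brokers: [<Brokers>]
    topic: payments    # illustrative topic
    encoding: <Encoding>

service:
  pipelines:
    logs:
      receivers: [kafka/orders, kafka/payments]  # both feed the same pipeline
      processors: [batch]
      exporters: [splunk_hec]
```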

What If I am using the existing SC4Kafka connector? Is there a way to migrate to the new one?

Yes, you can migrate from SC4Kafka to SOC4Kafka. To do that, follow the migration steps described here.

What about supported products?

The Splunk OpenTelemetry Connector for Kafka lets you subscribe to a Kafka topic and stream the data to the Splunk HTTP Event Collector from the following technologies:

  • Apache Kafka 
  • Amazon Managed Streaming for Apache Kafka (Amazon MSK)
  • Confluent Platform

Conclusion: 

Customers have faced challenges managing multiple instances of the old Splunk Connect for Kafka, particularly because it required direct installation on production Kafka instances, posing potential security risks. The new Splunk OpenTelemetry Collector for Kafka addresses these concerns by offering a more secure and manageable solution: SOC4Kafka supports standalone installation, meaning it can be deployed independently, separating customer infrastructure from Splunk monitoring components.
