The new SOC4Kafka connector, built on OpenTelemetry, collects Kafka messages and forwards them to Splunk. It serves as a replacement for the existing Kafka connector (SC4Kafka). SOC4Kafka is designed to capture events published to Kafka topics and forward them efficiently to Splunk.
By forwarding Kafka events to Splunk, SOC4Kafka lets organizations take advantage of Splunk's powerful analytics and visualization capabilities, enabling real-time monitoring, analysis, and valuable insights from the collected event data.
At first the answer may not seem obvious, but it becomes clear once explained.
A few factors come into play:
The goal is to simplify data acquisition from Kafka and provide an OpenTelemetry-compatible replacement for the existing SC4Kafka connector.
The SOC4Kafka connector is built on the OpenTelemetry Collector, whose data pipeline consists of several classes of components. The most important components for the Kafka OTel connector are receivers, processors, and exporters.
The Kafka receiver fetches data from a Kafka cluster. Its detailed configuration is covered in the Kafka receiver documentation.
Processors are optional components that can be added to the data pipeline. They transform data before it is exported, and each processor performs actions specific to its settings, such as batching, filtering, or dropping data. More information is available in the processor documentation.
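For example, the batch processor, which appears in the pipeline shown later in this article, can be tuned with a few settings; the values below are purely illustrative:

```yaml
processors:
  batch:
    # flush a batch once it holds this many items ...
    send_batch_size: 8192
    # ... or once this much time has passed since the batch was started
    timeout: 200ms
```

With no settings at all (`batch:`), the processor falls back to its defaults.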
The Splunk HEC exporter sends data to a Splunk index via the HTTP Event Collector (HEC). Its detailed configuration is covered in the Splunk HEC exporter documentation.
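Before wiring up the exporter, it can help to confirm that the HEC endpoint and token work at all. A minimal check with curl, where the hostname and token are placeholders for your own values:

```shell
# Send a test event directly to the HTTP Event Collector.
# -k skips TLS verification and should only be used against test instances.
curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk <Splunk HEC Token>" \
  -d '{"event": "SOC4Kafka connectivity test"}'
```

A successful call returns a JSON body with `"text":"Success"`; anything else points at an endpoint, token, or index problem worth fixing before involving the collector.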
Yes, it might seem complicated, but it really is not. All you need to do is:
Here is a more detailed installation guide: How to start with SOC4Kafka?
You can also use our quick start guide to get your hands on the connector and search through your Kafka events in Splunk faster.
Basic Configuration Example:
receivers:
  kafka:
    brokers: [<Brokers>]
    topic: <Topic>
    encoding: <Encoding>
processors:
  resourcedetection:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
  batch:
exporters:
  splunk_hec:
    token: <Splunk HEC Token>
    endpoint: <Splunk HEC Endpoint>
    source: <Source>
    sourcetype: <Sourcetype>
    index: <Splunk index>
service:
  pipelines:
    logs:
      receivers: [kafka]
      processors: [batch, resourcedetection]
      exporters: [splunk_hec]
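Once the placeholders are filled in, the collector can be started with this file. The command below assumes the otelcol-contrib distribution, which bundles the Kafka receiver and the Splunk HEC exporter; the binary name and path may differ in your installation:

```shell
# Run the collector with the configuration above saved as config.yaml
./otelcol-contrib --config config.yaml
```

If a component is missing from your distribution, the collector fails fast at startup with an error naming the unknown receiver or exporter, which makes misconfigurations easy to spot.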
This of course can be updated with more complex features like:
Everything is described in detail in the Advanced Configuration section.
Yes, you can migrate from SC4Kafka to SOC4Kafka; to do so, follow the migration steps described here.
The Splunk OpenTelemetry Connector for Kafka lets you subscribe to a Kafka topic and stream the data to the Splunk HTTP Event Collector on the following technologies:
Customers have faced challenges managing multiple instances of the old Splunk Connect for Kafka, in particular because it required direct installation on production Kafka instances, posing potential security risks. The new Splunk OpenTelemetry Collector for Kafka addresses these concerns with a more secure and manageable solution: SOC4Kafka supports standalone installation, meaning it can be deployed independently, separating customer infrastructure from the Splunk monitoring solution.