Getting Data In

Single event for multiple lines

roopeshetty
Path Finder

Hi Team,

 

I am collecting metrics via API calls every 5 minutes, but all the metrics arrive as a single event (shown below) for each 5-minute interval.

 

confluent_kafka_server_request_bytes{kafka_id="tythtyt",principal_id="sa-r29997",type="Fetch",} 2092668.0 1683872880000

confluent_kafka_server_request_bytes{kafka_id="tythtyt",principal_id="sa-9pyr8m",type="Metadata",} 1849.0 1683872880000

confluent_kafka_server_request_bytes{kafka_id="tythtyt",principal_id="sa-r29997",type="Metadata",} 66279.0 1683872880000

confluent_kafka_server_request_bytes{kafka_id="tythtyt",principal_id="u-09pr56",type="Metadata",} 0.0 1683872880000

confluent_kafka_server_response_bytes{kafka_id="rtrtt",principal_id="sa-y629ok",type="Fetch",} 5019.0 1683872880000

confluent_kafka_server_response_bytes{kafka_id="trtrt",principal_id="sa-8gg7jr",type="Metadata",} 0.0 1683872880000

confluent_kafka_server_memory{kafka_id="yyyy",topic="host002.json.cs.tt",} 1.0 1683872880000

confluent_kafka_server_memory{kafka_id="yyyy",topic="host002.json.cs.tt.enriched",} 1.0 1683872880000

confluent_kafka_server_memory{kafka_id="yyyy",topic="host002.json.cs.tt.fulfilment.auto",} 1.0 1683872880000

confluent_kafka_server_memory{kafka_id="yyyy",topic="host002.json.cs.tt.gg",} 0.0 1683872880000

 

I need to break these into individual events (each event starting with the text "confluent_kafka_"). I have edited my props.conf as below, but it is not working as expected; the data still arrives as a single event. Can someone please guide me on how to do this?

 

[source::kafka_metrics://kafka_metrics]

LINE_BREAKER = (confluent_kafka_)(\s)

SHOULD_LINEMERGE = false
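
A likely cause: the text matched by the first capturing group in LINE_BREAKER is consumed as the event delimiter, so `(confluent_kafka_)` strips that prefix from each event, and since the pattern contains no newline, Splunk may never find a break at the line boundaries. A common fix (a sketch, assuming the stanza name above matches your actual source) is to break on the newline(s) and keep the metric name in the event via a lookahead:

```ini
[source::kafka_metrics://kafka_metrics]
SHOULD_LINEMERGE = false
# Break on newlines; the lookahead keeps "confluent_kafka_" inside the event
LINE_BREAKER = ([\r\n]+)(?=confluent_kafka_)
```

Note that LINE_BREAKER and SHOULD_LINEMERGE only take effect at parse time, so this stanza must be deployed on the indexers or heavy forwarder that first parses the data, not on a universal forwarder, and a restart of that instance is needed for the change to apply.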
