
Why are events not breaking using props.conf?

roopeshetty
Path Finder
Hi Team,

I am collecting metrics using API calls every 5 minutes, but all the metrics are coming in as a single event, as shown in the attached screenshot kafka.JPG.
I need to break these into individual events (each event starting with the text "confluent_kafka_"). I have edited my props.conf as below, but it is not working as expected; the data is still being indexed as a single event. Can someone please guide me on how to do it?
 
 
[source::kafka_metrics://kafka_metrics]
LINE_BREAKER = (confluent_kafka_)(\s)
SHOULD_LINEMERGE = false

glc_slash_it
Path Finder

Try this config and then restart the Splunk instance.

LINE_BREAKER = ([\n\r]+)confluent_kafka_server_request_bytes\{
SHOULD_LINEMERGE = false

richgalloway
SplunkTrust

It looks like the LINE_BREAKER setting needs to be adjusted. The current setting has Splunk looking for the string "confluent_kafka_" followed by a whitespace character, but that does not match what is in the event. Additionally, the text matched by the first capture group is always discarded by Splunk, and here that group contains the "confluent_kafka_" prefix, which should be kept.

Try LINE_BREAKER = ()confluent_kafka_server_request_bytes\{
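
For illustration only, here is a minimal Python sketch of those semantics, with made-up sample strings; this is my reading of the docs, not Splunk's actual implementation:

import re

# Emulate LINE_BREAKER: an event ends where capture group 1 begins,
# the next event starts where group 1 ends, and group 1's text is discarded.
def line_break(raw, pattern):
    events, start = [], 0
    for m in re.finditer(pattern, raw):
        events.append(raw[start:m.start(1)])
        start = m.end(1)
    events.append(raw[start:])
    return [e for e in events if e.strip()]

raw = ('confluent_kafka_server_request_bytes{type="Fetch",} 1.0 1684127220000\n'
       'confluent_kafka_server_request_bytes{type="Metadata",} 0.0 1684127220000\n')
for event in line_break(raw, r"([\r\n]+)confluent_kafka_"):
    print(repr(event))

With an empty group like ()confluent_..., nothing is discarded and the break simply lands in front of the matched text.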

---
If this reply helps you, Karma would be appreciated.

roopeshetty
Path Finder

I tried this, but it's not working.


richgalloway
SplunkTrust

"its not working" isn't helpful.

Did you restart the indexer(s)?  Are you looking at data that was onboarded after the change (and restart)?  Existing data won't change.

Can you share raw data (before Splunk touches it)?
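
For the second question, one quick way to see when events were actually indexed (a sketch; adjust the index and sourcetype to your setup):

index=* sourcetype=kafka_metrics | eval indexed_at=strftime(_indextime, "%F %T") | table _time indexed_at _raw

If indexed_at is earlier than your restart, you are looking at old data that the new LINE_BREAKER was never applied to.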

---
If this reply helps you, Karma would be appreciated.

roopeshetty
Path Finder

Yes, I restarted it.

Below is the raw data as it is being captured. It all arrives as a single event, but we need to break every line into an individual event.

 

-----------------------------------------------------------------------------------------------------------------------------------

# HELP confluent_kafka_server_request_bytes The delta count of total request bytes from the specified request types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_request_bytes gauge
confluent_kafka_server_request_bytes{kafka_id="trtrtrt",principal_id="4343fgfg",type="ApiVersions",} 0.0 1684127220000
confluent_kafka_server_request_bytes{kafka_id="trtrtrt",principal_id="5656ghgh",type="ApiVersions",} 0.0 1684127220000
confluent_kafka_server_request_bytes{kafka_id="trtrtrt",principal_id="5656ghgh",type="Fetch",} 18000.0 1684127220000
confluent_kafka_server_request_bytes{kafka_id="trtrtrt",principal_id="4343fgfg",type="Metadata",} 0.0 1684127220000
confluent_kafka_server_request_bytes{kafka_id="trtrtrt",principal_id="5656ghgh",type="Metadata",} 0.0 1684127220000
# HELP confluent_kafka_server_response_bytes The delta count of total response bytes from the specified response types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_response_bytes gauge
confluent_kafka_server_response_bytes{kafka_id="trtrtrt",principal_id="4343fgfg",type="ApiVersions",} 0.0 1684127220000
confluent_kafka_server_response_bytes{kafka_id="trtrtrt",principal_id="5656ghgh",type="ApiVersions",} 0.0 1684127220000
confluent_kafka_server_response_bytes{kafka_id="trtrtrt",principal_id="5656ghgh",type="Fetch",} 5040.0 1684127220000
confluent_kafka_server_response_bytes{kafka_id="trtrtrt",principal_id="4343fgfg",type="Metadata",} 0.0 1684127220000
confluent_kafka_server_response_bytes{kafka_id="trtrtrt",principal_id="5656ghgh",type="Metadata",} 0.0 1684127220000
confluent_kafka_server_response_bytes{kafka_id="yuuyu",principal_id="hyyu44",type="ApiVersions",} 0.0 1684127220000
confluent_kafka_server_response_bytes{kafka_id="yuuyu",principal_id="hyyu44",type="Metadata",} 0.0 1684127220000
# HELP confluent_kafka_server_received_bytes The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_received_bytes gauge
confluent_kafka_server_received_bytes{kafka_id="yuyuyu",topic="tytytytuyu-processing-log",} 0.0 1684127220000
confluent_kafka_server_received_bytes{kafka_id="yuyuyu",topic="hhhh.kk.json.cs.order.fulfilment.auto.pricing.retry",} 553822.0 1684127220000
confluent_kafka_server_received_bytes{kafka_id="yuyuyu",topic="hhhh.kk.json.cs.order.yy",} 0.0 1684127220000
confluent_kafka_server_received_bytes{kafka_id="uiuiui",topic="topic_0",} 35385.0 1684127220000
# HELP confluent_kafka_server_sent_bytes The delta count of bytes of the customer's data sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_sent_bytes gauge
confluent_kafka_server_sent_bytes{kafka_id="uiuiuiu",topic="_confluent-controlcenter-7-3-0-0-MonitoringMessageAggregatorWindows-THREE_HOURS-changelog",} 0.0 1684127220000
confluent_kafka_server_sent_bytes{kafka_id="yuyuyuy",topic="kafka-connector.connect-configs",} 0.0 1684127220000
confluent_kafka_server_sent_bytes{kafka_id="yuyuyuy",topic="kafka.replicator-configs",} 0.0 1684127220000
confluent_kafka_server_sent_bytes{kafka_id="yuyuyuy",topic="tytyty-log",} 0.0 1684127220000
confluent_kafka_server_sent_bytes{kafka_id="yuyuyuy",topic="success-lcc-trttytyty",} 0.0 1684127220000
# HELP confluent_kafka_server_received_records The delta count of records received. Each sample is the number of records received since the previous data sample. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_received_records gauge
confluent_kafka_server_received_records{kafka_id="yuyuyu",topic="tytytytuyu-processing-log",} 0.0 1684127220000
confluent_kafka_server_received_records{kafka_id="yuyuyu",topic="hhhh.kk.json.cs.order.fulfilment.auto.pricing.retry",} 24.0 1684127220000
confluent_kafka_server_received_records{kafka_id="yuyuyu",topic="hhhh.kk.json.cs.order.yy",} 0.0 1684127220000
confluent_kafka_server_received_records{kafka_id="uiuiui",topic="topic_0",} 126.0 1684127220000
# HELP confluent_kafka_server_sent_records The delta count of records sent. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_sent_records gauge
confluent_kafka_server_sent_records{kafka_id="yuyuyu",topic="tytytytuyu-processing-log",} 0.0 1684127220000
confluent_kafka_server_sent_records{kafka_id="yuyuyu",topic="hhhh.kk.json.cs.order.fulfilment.auto.pricing.retry",} 24.0 1684127220000
confluent_kafka_server_sent_records{kafka_id="yuyuyu",topic="hhhh.kk.json.cs.order.yy",} 0.0 1684127220000
confluent_kafka_server_sent_records{kafka_id="uiuiuiu",topic="_confluent-command",} 0.0 1684127220000
confluent_kafka_server_sent_records{kafka_id="uiuiuiu",topic="_confluent-controlcenter-7-3-0-0-AlertHistoryStore-changelog",} 0.0 1684127220000
confluent_kafka_server_sent_records{kafka_id="yuyuyuy",topic="kafka-connector.connect-configs",} 0.0 1684127220000
confluent_kafka_server_sent_records{kafka_id="yuyuyuy",topic="kafka.replicator-configs",} 0.0 1684127220000
confluent_kafka_server_retained_bytes{kafka_id="uiuiuiu",topic="_confluent-controlcenter-7-3-0-0-monitoring-aggregate-rekey-store-repartition",} 0.0 1684127220000
confluent_kafka_server_retained_bytes{kafka_id="uiuiuiu",topic="_confluent-controlcenter-7-3-0-0-monitoring-message-rekey-store",} 7.148088E7 1684127220000
confluent_kafka_server_retained_bytes{kafka_id="uiuiuiu",topic="_confluent-controlcenter-7-3-0-0-monitoring-trigger-event-rekey",} 5.0344769E7 1684127220000

 


richgalloway
SplunkTrust

This works in my sandbox with the provided sample data.

LINE_BREAKER = ([\r\n]+)confluent_kafka_server_(request|response|received|sent|retained)_(bytes|records)
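
To sanity-check a breaker regex outside Splunk (this only tests the regex, not the full ingestion pipeline), you can count the break points it finds; the file name is a placeholder for the raw sample above:

import re

# One break point is expected in front of every confluent_kafka_server_* metric line.
BREAKER = re.compile(
    r"([\r\n]+)confluent_kafka_server_"
    r"(request|response|received|sent|retained)_(bytes|records)"
)

with open("kafka_sample.txt") as f:  # hypothetical file holding the raw sample
    raw = f.read()

print(len(BREAKER.findall(raw)), "break points found")

Note that the # HELP and # TYPE comment lines never match (the metric name there is preceded by "# HELP " or "# TYPE "), so they will be appended to the end of the preceding event rather than starting their own.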
---
If this reply helps you, Karma would be appreciated.

roopeshetty
Path Finder

Hmm, not sure what's wrong with my setup. I have modified the props.conf as below and restarted the Splunk service, but it is still coming in as a single event.

 

roopeshetty_0-1684157324535.png

 


richgalloway
SplunkTrust

I wonder if the problem lies with the stanza name rather than the LINE_BREAKER setting. If the name in a [source::...] stanza doesn't match the source of the data, then the settings will not be applied.

A source named "kafka_metrics://kafka_metrics" seems like a rather odd file name.  Are you sure that's the correct source name?  How are you obtaining the name?
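
One way to check what Splunk actually recorded (a sketch; adjust the index and time range to your environment):

index=* source=*kafka_metrics* | stats count by source, sourcetype

A [source::...] stanza only applies when its name matches that source value (exactly, or via the wildcard rules in props.conf.spec), so even a small difference will keep the settings from applying.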

---
If this reply helps you, Karma would be appreciated.

roopeshetty
Path Finder

Yes, source="kafka_metrics://kafka_metrics", as you can see below:

roopeshetty_0-1684166556001.png

 


richgalloway
SplunkTrust

Is that the *raw* source name, before any Splunk transforms?

Can you change the stanza to a sourcetype one, at least to confirm the regex is correct?
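
Something like the following, as a sketch (assuming the sourcetype assigned at input time really is kafka_metrics). Note that LINE_BREAKER is applied at parse time, so it must be in props.conf on the indexer or heavy forwarder that first parses the data, not on a universal forwarder.

[kafka_metrics]
LINE_BREAKER = ([\r\n]+)confluent_kafka_server_(request|response|received|sent|retained)_(bytes|records)
SHOULD_LINEMERGE = false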

---
If this reply helps you, Karma would be appreciated.

roopeshetty
Path Finder

 

Our sourcetype is kafka_metrics. I have now modified the props.conf as below, but the issue remains. Do I need to change anything in transforms.conf as well?

roopeshetty_0-1684211563555.png

 


ldongradi_splun
Splunk Employee

Any news?

How did you solve this issue, if you ever did?


richgalloway
SplunkTrust

The screenshot says "props".  Is that the actual file name?  It should be props.conf.

It's difficult to determine whether changes need to be made to transforms.conf without seeing the file, but since props.conf does not reference any transforms, I'd say "no".

I'm still waiting for an answer to this earlier question: Are you looking at data that was onboarded after the change (and restart)?

How is the script(?) that is making the API calls tagging the data as sourcetype=kafka_metrics?  How is the data being sent to Splunk?
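
The transport matters here: data POSTed to the HTTP Event Collector's /services/collector/event endpoint arrives pre-delimited and skips index-time line breaking, while the /services/collector/raw endpoint does go through the line-breaking pipeline. As a hypothetical sketch of the latter (the URL, token, and file name are placeholders):

import requests

# Placeholders -- adjust to your environment.
HEC_URL = "https://splunk.example.com:8088/services/collector/raw"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

# Hypothetical file holding one scrape of the metrics API.
raw_metrics = open("metrics_scrape.txt").read()

resp = requests.post(
    HEC_URL,
    params={"sourcetype": "kafka_metrics", "source": "kafka_metrics://kafka_metrics"},
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    data=raw_metrics,
    verify=False,  # lab only; verify certificates in production
)
resp.raise_for_status()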

---
If this reply helps you, Karma would be appreciated.