Multiple lines with the same time
I have logs from a custom application being streamed into Splunk using a universal forwarder. The problem I have there is multiple lines with the same time. See below.
19:12:51.790,16526719,TCP,2,3404,2226
19:12:51.790,66870655,TCP,10,53743,355114
19:12:51.790,199246079,TCP,5,2937,5715
19:12:51.790,281972991,TCP,2,55722,43156
19:12:51.790,282382591,TCP,11,2458,11480
I have extracted the fields of this data using the props.conf and transforms.conf files. However, when I do a search by what we call Cust_id, it only pulls out the information from the first line logged for a timestamp. In the above example it would only find Cust_id = 16526719.
How can I adjust my query to find the Cust_id per every line that is indexed in Splunk?
Thanks for the answer. It did work when I applied the line break to the indexer.
I restarted Splunk and ran a real-time search to see if the newly indexed data would be line-broken. It did not address the issue.
FYI - I am using a search head and multiple indexers. I have only made the adjustment on the search head. Do I need to do this on the indexers as well?

YES! The LINE_BREAKER and SHOULD_LINEMERGE settings are effective only at index time (read: indexer) and are ignored by the search head :). On the other hand, EXTRACT-cust_id is used by the search head to perform the extraction of the field at search time.
I tried both suggestions; however, I am unable to get Splunk to break on every line.
And it will only apply to new data.

dobarnes, did you restart Splunk (the indexer) after editing the line-breaking settings in props.conf?

It is more efficient, faster, and easier for Splunk to break on every line than to line-merge for each event. Try the following stanza in props.conf - it will both break the stream of data at every line AND extract a cust_id field for each event:
[my_sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
EXTRACT-cust_id = (?i)\.\d{3},(?<cust_id>\d+),
Hope this helps.
> please upvote and accept answer if you find it useful - thanks!
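If you want to sanity-check what that stanza does before restarting the indexer, here is a quick sketch in Python against the sample lines from the question. This is not Splunk itself, just the same two regexes applied by hand; note Python's named-group syntax is `(?P<name>...)` rather than the `(?<name>...)` form props.conf uses.

```python
import re

# Sample events from the question
sample = """19:12:51.790,16526719,TCP,2,3404,2226
19:12:51.790,66870655,TCP,10,53743,355114
19:12:51.790,199246079,TCP,5,2937,5715"""

# LINE_BREAKER=([\r\n]+): break the stream into one event per line
events = re.split(r"[\r\n]+", sample)

# EXTRACT-cust_id = (?i)\.\d{3},(?<cust_id>\d+),
# (Python spelling of the same named group)
pattern = re.compile(r"(?i)\.\d{3},(?P<cust_id>\d+),")

cust_ids = [m.group("cust_id") for e in events if (m := pattern.search(e))]
print(cust_ids)  # one cust_id per event, not just the first line
```

With the events broken per line, the search-time extraction finds a cust_id in every event instead of only the first one after the timestamp.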
Do you need to keep it as multiline? Because I would get Splunk to treat each line as a single event and use DELIMS to extract the fields.
props.conf:
[your_data_sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
REPORT-delims=commalist
transforms.conf:
[commalist]
DELIMS = ","
FIELDS = field1, field2, field3, ...
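As a rough illustration of what the DELIMS/FIELDS pair does, here is the same idea in Python: split each event on commas and zip the values against an ordered list of field names. The field names below are made up for the example - substitute whatever your real columns mean.

```python
import csv
import io

# Hypothetical field names, in column order (replace with your own)
FIELDS = ["time", "cust_id", "protocol", "conn_count", "bytes_in", "bytes_out"]

raw = "19:12:51.790,16526719,TCP,2,3404,2226"

# DELIMS="," splits the event on commas; FIELDS names each value in order
row = next(csv.reader(io.StringIO(raw)))
event = dict(zip(FIELDS, row))
print(event["cust_id"])  # the second column becomes the cust_id field
```

Because every comma-separated value gets a name, you can then search or stats by cust_id (or any other column) without writing a regex per field.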
