A customer is ingesting a custom log file with multi-line events using a Splunk Universal Forwarder, which sends data to a Splunk Heavy Forwarder.
Each event should contain 20 lines: an event separator (a series of dashes), a newline, a date, a newline, the data payload, followed by a trailing newline. A new event is written once every minute.
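For illustration, an event of this shape might look like the following (the timestamp format and payload contents are invented, and the payload is abbreviated):

```
--------------------------------------------------
2024-01-15 10:23:00
payload line 1
payload line 2
...
payload line 17
```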
When the event is indexed it is seen as 3 separate events: the data payload, the date, and finally the event separator.
The customer had tried every combination of LINE_BREAKER, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE, BREAK_ONLY_BEFORE_DATE, MUST_NOT_BREAK_AFTER, TRUNCATE, MAX_EVENTS, etc. in props.conf on the Heavy Forwarder, but the events were always broken incorrectly.
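For reference, a typical props.conf attempt on the Heavy Forwarder might have looked something like this (the sourcetype name and the separator regex are illustrative assumptions, not the customer's actual config):

```
[custom_multiline]
# Break events only where a newline is followed by the dashed separator
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=-{10,})
MAX_EVENTS = 256
TRUNCATE = 0
```

Settings of this kind did not help here because, as described below, the event boundaries were being introduced before parsing ever happened on the Heavy Forwarder.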
The solution was found in the documentation for inputs.conf in $SPLUNK_HOME/etc/system/README/inputs.conf.spec:
multiline_event_extra_waittime = [true|false]
By default, the file monitor sends an event delimiter when:
It reaches EOF of a file it monitors and
The last character it reads is a newline.
In some cases, it takes time for all lines of a multiple-line event to arrive.
Set to true to delay sending an event delimiter until the time that the file monitor closes the file, as defined by the 'time_before_close' setting,
to allow all event lines to arrive.
Defaults to false.
Setting "multiline_event_extra_waittime = true" in inputs.conf resolved the issue.
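A minimal sketch of the fix in inputs.conf on the Universal Forwarder monitoring the file (the monitor path and sourcetype name are illustrative assumptions):

```
[monitor:///var/log/custom/app.log]
sourcetype = custom_multiline
# Delay the event delimiter until the file monitor closes the file,
# so that all lines of a slowly-written multi-line event can arrive
multiline_event_extra_waittime = true
# Seconds to wait after reaching EOF before closing the file;
# may need tuning if event lines arrive with longer gaps
time_before_close = 3
```

The delay imposed by time_before_close is what gives the remaining event lines time to be written before the delimiter is sent.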