Getting Data In

Multiline Events sent from Universal Forwarder not breaking correctly

dshakespeare_sp
Splunk Employee

The customer is ingesting a custom log file with multiline events using a Splunk Universal Forwarder, which sends the data to a Splunk Heavy Forwarder.
Each event should contain 20 lines: an event separator (a series of dashes), a new line, a date, a new line, and the data payload, followed by a new line. A new event is written once every minute.
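
For illustration, one such event might look like the following (the separator length, timestamp format, and payload contents here are assumptions, not taken from the customer's actual log):

    --------------------
    2023-05-01 10:15:00
    payload line 1
    payload line 2
    (remaining payload lines)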

When the event is indexed it is seen as 3 separate events: the data payload, the date, and finally the event separator.
The customer tried every combination of LINE_BREAKER, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE, BREAK_ONLY_BEFORE_DATE, MUST_NOT_BREAK_AFTER, TRUNCATE, MAX_EVENTS, etc. in props.conf on the Heavy Forwarder, but the event was always broken incorrectly there.
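
For reference, a props.conf stanza along these lines would normally be expected to break events on the dashed separator; the sourcetype name and separator regex are assumptions based on the format described above, not the customer's actual config:

    [custom:multiline]
    # Place an event boundary wherever a run of newlines is
    # followed by a separator line of dashes.
    LINE_BREAKER = ([\r\n]+)(?=-{5,})
    SHOULD_LINEMERGE = false
    TRUNCATE = 10000

None of these combinations worked here because, as the answer below explains, the root cause was on the Universal Forwarder side: the forwarder was delivering the event in pieces before all of its lines had been written.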


dshakespeare_sp
Splunk Employee

The solution was found in the documentation for inputs.conf, in $SPLUNK_HOME/etc/system/README/inputs.conf.spec:

multiline_event_extra_waittime = [true|false]

By default, the file monitor sends an event delimiter when:

* It reaches EOF of a file it monitors, and
* The last character it reads is a newline.

In some cases, it takes time for all lines of a multiple-line event to arrive.

Set to true to delay sending an event delimiter until the time that the file monitor closes the file, as defined by the 'time_before_close' setting,
to allow all event lines to arrive.

Defaults to false.

Setting "multiline_event_extra_waittime = true" resolved the issue
