Getting Data In

Long Multiline events not breaking correctly

Genti
Splunk Employee

A customer has a log file that is failing to break correctly:
Some of the events in the file are single-line events; others are multiline.

The customer was using SHOULD_LINEMERGE = true with BREAK_ONLY_BEFORE = , and this was only partially working: it handled the single-line events correctly, but not the multiline ones.

I changed to and tested a different props.conf configuration with SHOULD_LINEMERGE = false and LINE_BREAKER = .
This new configuration worked for roughly 98% of the log file; however, there were still a few events that would not break correctly.

The events for which this happens are very large, with more than 400 lines. What can I do to make sure that these events also break correctly?
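
For reference, the configuration under test would look roughly like this. The sourcetype name and the LINE_BREAKER regex are placeholders, since the actual values were not shown in the post:

    [my_custom_sourcetype]
    # Hypothetical stanza - the name and regex are illustrative only.
    SHOULD_LINEMERGE = false
    # Break events at newlines followed by a leading timestamp, e.g. "2024-01-15 ..."
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}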

1 Solution

Genti
Splunk Employee

After much testing of the regex (to make sure that it was not at fault), the only thing left to try was to find out exactly how big these events were.
Looking at splunkd.log and finding the relevant string in the log (where it says that Splunk was not able to parse the event at ) I was able to identify the parts of the log where this was happening.
The event itself is about 800-900 lines, and what is happening is that it breaks roughly every 300 lines.
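
A quick way to find those messages is to search Splunk's internal index. Something along these lines should surface the line-breaking warnings; the component name here is an assumption about which processor logs them, so adjust it if your splunkd.log says otherwise:

    index=_internal sourcetype=splunkd log_level=WARN component=LineBreakingProcessor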

There is one attribute left to use in these cases, TRUNCATE - setting it appropriately makes sure that the event doesn't get cut off at the default size.

From the docs we have:

TRUNCATE = <non-negative integer>

    * Change the default maximum line length.
    * Set to 0 if you never want truncation (very long lines are, however, often a sign of garbage data).
    * Defaults to 10000. 

Setting TRUNCATE = 0 and restarting finally made the log file break correctly.
NOTE: it appears the default of 10,000 is not lines but characters.
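
Putting it together, the final working configuration would look something like this (again, the sourcetype name and LINE_BREAKER regex are placeholders for the customer's actual values):

    [my_custom_sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
    # Disable truncation entirely so the 800-900 line events come through whole.
    TRUNCATE = 0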

Hope this helps someone out there, as I had a bit of a hard time figuring out what was not working correctly.
Cheers,
.gz


gkanapathy
Splunk Employee

Correct, TRUNCATE does not measure lines. When you use SHOULD_LINEMERGE = false, every event is a single "line", so counting these "lines" would not be useful.
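
That also explains the numbers above: at an average of 30-odd characters per line, roughly 300 lines is where an event crosses the 10,000-character default and gets cut. If disabling truncation outright feels risky (a runaway line of garbage data could then produce an enormous event), a generous character limit is a middle ground; the 500000 below is an illustrative value, not one from this thread:

    [my_custom_sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
    # Cap events at 500,000 characters instead of disabling truncation entirely.
    TRUNCATE = 500000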
