
LineBreakingProcessor - Truncating line because limit of 10000 has been exceeded

lisaac
Path Finder

I have an Oracle Diagnostic Log with entries that exceed 10K characters. Which option in limits.conf lets me adjust this limit and eliminate the warning message in splunkd.log?

In props.conf, I have the following:

[odl_stdout]
BREAK_ONLY_BEFORE = ^[2
SHOULD_LINEMERGE = true

I am seeing the following errors in splunkd.log:

02-28-2012 17:01:54.229 +0000 WARN LineBreakingProcessor - Truncating line because limit of 10000 has been exceeded: 13356

02-28-2012 17:01:54.255 +0000 ERROR DatetimeInitUtils - Failure to process regex: ^[2

02-28-2012 17:02:27.337 +0000 ERROR DatetimeInitUtils - Failure to process regex: ^[2

I tried altering the following in limits.conf to no avail:

[kv]

maxchars = 20480

Any suggestions?


jbsplunk
Splunk Employee

I am pretty sure I know the setting you are looking for; see props.conf.spec:

http://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf

#******************************************************************************
# Line breaking
#******************************************************************************

# Use the following attributes to define the length of a line.

TRUNCATE = <non-negative integer>
 * Change the default maximum line length (in bytes).
 * Although this is in bytes, line length is rounded down when this would
  otherwise land mid-character for multi-byte characters.
 * Set to 0 if you never want truncation (very long lines are, however, often a sign of
  garbage data).
 * Defaults to 10000 bytes.

You need to increase this value to something above 13356, and you probably want to give yourself some breathing room, so maybe start with 15k if you'll be pulling in similar messages moving forward.
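As a sketch, the full stanza might look like the following (15000 is an assumption based on the 13356-byte line reported in the warning, not a required value; note also the escaped `[` in the regex, since the ERROR messages show `^[2` failing to compile):

```
[odl_stdout]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\[2
# raise the default 10000-byte line limit above the observed 13356 bytes
TRUNCATE = 15000
```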


jbsplunk
Splunk Employee

I suspect your regex is also incorrect; you probably want something like:

^\[2
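As a standalone illustration (not Splunk code) of why `^[2` fails: the unescaped `[` opens a character class that is never closed, so regex engines reject the pattern. Python's re module shows the same behavior:

```python
import re

# "^[2" opens a character class ("[") that is never terminated, so the
# pattern is rejected outright
try:
    re.compile(r"^[2")
except re.error as err:
    print("invalid pattern:", err)

# Escaping the bracket matches a literal "[" followed by "2" at line start
odl_line = "[2012-02-28T17:01:54.229+00:00] [odl] example message"
print(bool(re.match(r"^\[2", odl_line)))  # True
```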

lisaac
Path Finder

I figured this would work; I remember this setting from a past issue, but I have not used it in a while. For testing, I added TRUNCATE=0 to the local props.conf file on the indexer. The interesting thing is that this did not work.

The props.conf file entry follows:

[odl_stdout]

TRUNCATE=0

BREAK_ONLY_BEFORE = ^[2

SHOULD_LINEMERGE = true

The errors persist:

02-28-2012 18:40:37.625 +0000 WARN LineBreakingProcessor - Truncating line because limit of 10000 has been exceeded: 13356

02-28-2012 18:40:38.614 +0000 ERROR DatetimeInitUtils - Failure to process regex: ^[2

I may have to review the data on the source host.
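One way to review the source data is to scan the file for lines longer than the default limit before it ever reaches Splunk. This is a hypothetical helper, not part of Splunk itself; the demo file and its contents are made up for illustration:

```python
import os
import tempfile

def find_long_lines(path, limit=10000):
    """Yield (line_number, byte_length) for lines exceeding `limit` bytes."""
    with open(path, "rb") as f:  # read bytes, since TRUNCATE counts bytes
        for number, line in enumerate(f, start=1):
            length = len(line.rstrip(b"\r\n"))
            if length > limit:
                yield number, length

# Demo with a throwaway file: one short line, one 13356-byte line
with tempfile.NamedTemporaryFile("wb", suffix=".log", delete=False) as tmp:
    tmp.write(b"[2012-02-28T17:01:54] short line\n")
    tmp.write(b"[2012-02-28T17:01:55] " + b"x" * 13334 + b"\n")
    demo = tmp.name

print(list(find_long_lines(demo)))  # [(2, 13356)]
os.remove(demo)
```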

