
LineBreakingProcessor - Truncating line because limit of 10000 has been exceeded

lisaac
Path Finder

I have an Oracle Diagnostic Log with events that exceed 10K characters. Which option in limits.conf can I adjust to eliminate the truncation warning in splunkd.log?

In props.conf, I have the following:

[odl_stdout]
BREAK_ONLY_BEFORE = ^[2
SHOULD_LINEMERGE = true

I am seeing the following errors in splunkd.log:

02-28-2012 17:01:54.229 +0000 WARN LineBreakingProcessor - Truncating line because limit of 10000 has been exceeded: 13356

02-28-2012 17:01:54.255 +0000 ERROR DatetimeInitUtils - Failure to process regex: ^[2

02-28-2012 17:02:27.337 +0000 ERROR DatetimeInitUtils - Failure to process regex: ^[2

I tried altering the following in limits.conf to no avail:

[kv]
maxchars = 20480

Any suggestions?

1 Solution

jbsplunk
Splunk Employee

I am pretty sure I know the setting you are looking for; see props.conf.spec:

http://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf

#******************************************************************************
# Line breaking
#******************************************************************************

# Use the following attributes to define the length of a line.

TRUNCATE = <non-negative integer>
 * Change the default maximum line length (in bytes).
 * Although this is in bytes, line length is rounded down when this would
  otherwise land mid-character for multi-byte characters.
 * Set to 0 if you never want truncation (very long lines are, however, often a sign of
  garbage data).
 * Defaults to 10000 bytes.

You need to increase this value to something above 13356, and you probably want to give yourself some breathing room, so maybe start with 15k if you'll be pulling in similar messages moving forward.
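For example, the stanza would gain a line like the following (15000 is just a starting point with some headroom; pick whatever value fits your data):

[odl_stdout]
TRUNCATE = 15000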


jbsplunk
Splunk Employee

I suspect your regex is also incorrect; you probably want to use something like:

^\[2
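The unescaped [ in ^[2 opens a character class that is never closed, which is why DatetimeInitUtils reports a failure to process the regex. Escaping the bracket makes the pattern match a literal [ followed by a 2, e.g.:

BREAK_ONLY_BEFORE = ^\[2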

lisaac
Path Finder

I figured this would work. I remember this value from a past query, but I have not used it in a while. I added TRUNCATE=0 for testing to the local props.conf file on the indexer. The interesting thing is that this did not work.

The props.conf file entry follows:

[odl_stdout]
TRUNCATE=0
BREAK_ONLY_BEFORE = ^[2
SHOULD_LINEMERGE = true

The errors persist:

02-28-2012 18:40:37.625 +0000 WARN LineBreakingProcessor - Truncating line because limit of 10000 has been exceeded: 13356

02-28-2012 18:40:38.614 +0000 ERROR DatetimeInitUtils - Failure to process regex: ^[2

I may have to review the data on the source host.
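Putting the two suggestions from this thread together, the stanza would look something like this (TRUNCATE = 0 disables truncation entirely; any value above 13356 would also be enough for the event in the warning, and the escaped bracket in BREAK_ONLY_BEFORE is the fix noted above):

[odl_stdout]
TRUNCATE = 0
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\[2

Note that these are index-time settings, so they only take effect after splunkd is restarted and only for newly indexed data.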

