Getting Data In

Why is my Splunk forwarder 6.3.3 crashing with "LineBreakingProcessor - Truncating line because limit" error?

amoldesai
Explorer

Hi,

The Splunk forwarder crashed a few times in the last two days after we onboarded a few new customers.

Please find below the crash.log file and the last few lines of splunkd.log.

Appreciate any help that you can provide here.

crash.log

[build f44afce176d0] 2017-01-22 22:29:42
Received fatal signal 6 (Aborted).
 Cause:
   Signal sent by PID 25480 running under UID 501.
 Crashing thread: structuredparsing
 Registers:
    RIP:  [0x0000003EB9A32625] gsignal + 53 (/lib64/libc.so.6)
    RDI:  [0x0000000000006388]
    RSI:  [0x00000000000063AF]
    RBP:  [0x00000000019618A8]
    RSP:  [0x00007F7F861FDA08]
    RAX:  [0x0000000000000000]
    RBX:  [0x00007F7F9DEBE000]
    RCX:  [0xFFFFFFFFFFFFFFFF]
    RDX:  [0x0000000000000006]
    R8:  [0xFEFEFEFEFEFEFEFF]
    R9:  [0x00007F7F9DF0DF60]
    R10:  [0x0000000000000008]
    R11:  [0x0000000000000206]
    R12:  [0x00000000019618E8]
    R13:  [0x0000000001961CA0]
    R14:  [0x00007F7F59AE2018]
    R15:  [0x0000000000000006]
    EFL:  [0x0000000000000206]
    TRAPNO:  [0x0000000000000000]
    ERR:  [0x0000000000000000]
    CSGSFS:  [0x0000000000000033]
    OLDMASK:  [0x0000000000000000]

OS: Linux
Arch: x86-64

 Backtrace:
  [0x0000003EB9A32625] gsignal + 53 (/lib64/libc.so.6)
  [0x0000003EB9A33E05] abort + 373 (/lib64/libc.so.6)
  [0x0000003EB9A2B74E] ? (/lib64/libc.so.6)
  [0x0000003EB9A2B810] __assert_perror_fail + 0 (/lib64/libc.so.6)
  [0x0000000000FEA0D7] _ZN18CsvStreamingParser28_trimToBeforeWsAndGotElementEm + 199 (splunkd)
  [0x0000000000FEB67A] _ZN18CsvStreamingParser5parseEPKcm + 4010 (splunkd)
  [0x0000000000A0ED57] _ZN14CsvLineBreaker7processER15CowPipelineDataP18PipelineDataVector + 71 (splunkd)
  [0x0000000000A0E264] _ZN21LineBreakingProcessor7executeER15CowPipelineDataP18PipelineDataVector + 196 (splunkd)
  [0x0000000000A0E906] _ZN21LineBreakingProcessor12executeMultiER18PipelineDataVectorPS0_ + 54 (splunkd)
  [0x0000000000DB30DF] _ZN8Pipeline4mainEv + 623 (splunkd)
  [0x00000000010A164E] _ZN6Thread8callMainEPv + 62 (splunkd)
  [0x0000003EB9E07AA1] ? (/lib64/libpthread.so.0)
  [0x0000003EB9AE893D] clone + 109 (/lib64/libc.so.6)
 Linux /  / 2.6.32-642.11.1.el6.x86_64 / #1 SMP Wed Oct 26 10:25:23 EDT 2016 / x86_64
 Last few lines of stderr (may contain info on assertion failure, but also could be old):
    2017-01-22 03:41:24.339 -0500 splunkd started (build f44afce176d0)
    2017-01-22 05:30:36.869 -0500 Interrupt signal received
    2017-01-22 05:31:49.737 -0500 splunkd started (build f44afce176d0)
    splunkd: /home/build/build-src/ember/src/util/CsvStreamingParser.cpp:940: void CsvStreamingParser::_trimToBeforeWsAndGotElement(size_t): Assertion `to_remove + extra <= _str.size()' failed.
    2017-01-22 08:35:39.667 -0500 splunkd started (build f44afce176d0)
    2017-01-22 11:21:52.000 -0500 Interrupt signal received
    2017-01-22 11:30:39.313 -0500 splunkd started (build f44afce176d0)
    splunkd: /home/build/build-src/ember/src/util/CsvStreamingParser.cpp:940: void CsvStreamingParser::_trimToBeforeWsAndGotElement(size_t): Assertion `to_remove + extra <= _str.size()' failed.

/etc/redhat-release: Red Hat Enterprise Linux Server release 6.7 (Santiago)

 glibc version: 2.12
 glibc release: stable
Last errno: 0
Threads running: 36
Runtime: 39543.666373s
argv: [splunkd -p 8089 restart]
Thread: "structuredparsing", did_join=0, ready_to_run=Y, main_thread=N
First 8 bytes of Thread token @0x7f7f87c1e210:
00000000  00 e7 1f 86 7f 7f 00 00                           |........|
00000008

x86 CPUID registers:

     0: 0000000B 756E6547 6C65746E 49656E69
     1: 00020651 02010800 82982203 0FABFBFF
     2: 55035A01 00F0B2FF 00000000 00CA0000
     3: 00000000 00000000 00000000 00000000
     4: 00000000 00000000 00000000 00000000
     5: 00000000 00000000 00000000 00000000
     6: 00000007 00000002 00000001 00000000
     7: 00000000 00000000 00000000 00000000
     8: 00000000 00000000 00000000 00000000
     9: 00000001 00000000 00000000 00000000
     A: 07300401 0000007F 00000000 00000000
     B: 00000000 00000000 000000CD 00000002
  80000000: 80000008 00000000 00000000 00000000
  80000001: 00000000 00000000 00000001 28100800
  80000002: 65746E49 2952286C 6F655820 2952286E
  80000003: 55504320 20202020 20202020 58202020
  80000004: 30383635 20402020 33332E33 007A4847
  80000005: 00000000 00000000 00000000 00000000
  80000006: 00000000 00000000 01006040 00000000
  80000007: 00000000 00000000 00000000 00000100
  80000008: 00003028 00000000 00000000 00000000
terminating...

splunkd.log file :

01-22-2017 22:29:42.914 -0500 WARN  FileClassifierManager - Some automatic additions to the learned app were squelched by learned_sourcetype_limit (1000) in limits.conf: some squelched sourcetypes: filename=/1481425575 st=1481425575 ; Use sourcetype rules in input stanzas or source:: rules in props.conf to control sourcetype assignment

01-22-2017 22:29:42.920 -0500 WARN  LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 1409937 - data_source="/1481425575.gz", data_host="83101", data_sourcetype="np_syslog_tsv"

01-22-2017 22:29:42.935 -0500 WARN  LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 2433365 - data_source="/1481425575.gz", data_host="83101", data_sourcetype="np_syslog_tsv"

01-22-2017 22:29:42.949 -0500 WARN  LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 2354525 - data_source="/1481425575.gz", data_host="83101", data_sourcetype="np_syslog_tsv"

hunters_splunk
Splunk Employee

Hi amoldesai,

It looks like your line length has exceeded the default maximum of 10,000 bytes.
There is a TRUNCATE setting in props.conf that sets the default maximum line length:

TRUNCATE =
* Change the default maximum line length (in bytes).
* Although this is in bytes, line length is rounded down when this would
otherwise land mid-character for multi-byte characters.
* Set to 0 if you never want truncation (very long lines are, however, often
a sign of garbage data).
* Defaults to 10000 bytes.

You can create a props.conf under your app's local folder and set TRUNCATE there to override the default maximum line length.

For example, create $SPLUNK_HOME/etc/apps//local/props.conf with a stanza for the affected sourcetype (np_syslog_tsv in your warnings), and set the following:

[np_syslog_tsv]
TRUNCATE = 3000000
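Before settling on a value, it can help to measure the longest line actually present in the offending file, so TRUNCATE can be set with some headroom. A minimal sketch (it builds a small sample file here so it runs standalone; point zcat at the real file from your warnings, e.g. /1481425575.gz, instead):

```shell
# Build a tiny sample gzipped "log" with a known longest line (40 bytes).
# In practice, replace sample.gz with the file named in the warnings.
printf 'short\n%s\n' "$(printf 'x%.0s' $(seq 1 40))" | gzip > sample.gz

# Report the longest line length in the gzipped file.
zcat sample.gz | awk '{ if (length($0) > max) max = length($0) } END { print max + 0 }'
# prints 40
```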

Hope this helps. Thanks!
Hunter


amoldesai
Explorer

Thanks a lot for your suggestion.

I will try it out and let you know.

Thanks

Amol


amoldesai
Explorer

Thanks, it works. The Splunk forwarder instance no longer crashes. However, the data is not being indexed because of the following issue.

01-22-2017 22:29:42.914 -0500 WARN FileClassifierManager - Some automatic additions to the learned app were squelched by learned_sourcetype_limit (1000) in limits.conf: some squelched sourcetypes: filename=/1481425575 st=1481425575 ; Use sourcetype rules in input stanzas or source:: rules in props.conf to control sourcetype assignment
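For reference, this warning means the forwarder is auto-learning a new sourcetype for each file (the numeric filenames become sourcetype names such as 1481425575) and stops once learned_sourcetype_limit (1000) in limits.conf is reached. As the message itself suggests, setting an explicit sourcetype on the input stanza skips the learned app entirely. A sketch, in which the monitored path is an assumption and np_syslog_tsv is the sourcetype from the warnings above:

```ini
# inputs.conf (sketch): pin the sourcetype so automatic classification
# and the learned app are bypassed. The monitored path is hypothetical.
[monitor:///var/customer_feeds/*.gz]
sourcetype = np_syslog_tsv
```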
