Splunkd crash after upgrade to 5.0.1

rush05
Engager

Hi, we just upgraded Splunk from 4.3.1 to 5.0.1, and now splunkd crashes every time we start it.

Here is the text from the crash log:

[build 143156] 2013-01-21 11:29:10
Received fatal signal 6 (Abort).
 Cause:
   Unknown signal origin (si_code=-1).
 Crashing thread: MainTailingThread
 Registers:
    RIP:  [0xFFFFFD7FFEAE314A] __lwp_kill + 10 (/lib/amd64/libc.so.1)
    RDI:  [0x0000000000000032]
    RSI:  [0x0000000000000006]
    RBP:  [0xFFFFFD7FF53FF350]
    RSP:  [0xFFFFFD7FF53FF348]
    RAX:  [0x0000000000000000]
    RBX:  [0x0000000000000006]
    RCX:  [0x0000000000000005]
    RDX:  [0xFFFFFEBFB492CBE0]
    R8:  [0x0000000000000001]
    R9:  [0x0000000000000000]
    R10:  [0x0000000000000005]
    R11:  [0x0000000000000020]
    R12:  [0x00000000000000E5]
    R13:  [0x0000000001CC4E98]
    R14:  [0x0000000001AEA230]
    R15:  [0xFFFFFD7FF53FF840]
    RFL:  [0x0000000000000286]
    TRAPNO:  [0x000000000000000E]
    ERR:  [0x0000000000000004]
    CS:  [0x000000000000004B]
    GS:  [0x0000000000000000]
    FS:  [0x0000000000000000]

 OS: SunOS
 Arch: x86-64

 Backtrace:
  [0xFFFFFFFFFFFFFFFF] ?
  [0xFFFFFD7FFEA87F19] raise + 25 (/lib/amd64/libc.so.1)
  [0xFFFFFD7FFEA669CE] abort + 94 (/lib/amd64/libc.so.1)
  [0xFFFFFD7FFEA66C8E] _assert + 126 (/lib/amd64/libc.so.1)
  [0x00000000009D1148] _ZN16FileInputTracker10computeCRCEPm14FileDescriptorRK3Strll + 920 (/opt/splunk/bin/splunkd)
  [0x00000000009D1656] _ZN16FileInputTracker11fileHalfMd5EPm14FileDescriptorRK3Strll + 22 (/opt/splunk/bin/splunkd)
  [0x00000000009EA99A] _ZN3WTF13loadFishStateEb + 650 (/opt/splunk/bin/splunkd)
  [0x00000000009DDAB4] _ZN10TailReader8readFileER15WatchedTailFileP11TailWatcher + 148 (/opt/splunk/bin/splunkd)
  [0x00000000009DDCDE] _ZN11TailWatcher8readFileER15WatchedTailFile + 254 (/opt/splunk/bin/splunkd)
  [0x00000000009DFD92] _ZN11TailWatcher11fileChangedEP16WatchedFileStateRK7Timeval + 482 (/opt/splunk/bin/splunkd)
  [0x0000000000EBFC84] _ZN30FilesystemChangeInternalWorker15callFileChangedER7TimevalP16WatchedFileState + 148 (/opt/splunk/bin/splunkd)
  [0x0000000000EC1743] _ZN30FilesystemChangeInternalWorker12when_expiredERy + 499 (/opt/splunk/bin/splunkd)
  [0x0000000000F18634] _ZN11TimeoutHeap18runExpiredTimeoutsER7Timeval + 180 (/opt/splunk/bin/splunkd)
  [0x0000000000EBABBC] _ZN9EventLoop3runEv + 188 (/opt/splunk/bin/splunkd)
  [0x00000000009E6B70] _ZN11TailWatcher3runEv + 144 (/opt/splunk/bin/splunkd)
  [0x00000000009E6CD6] _ZN13TailingThread4mainEv + 278 (/opt/splunk/bin/splunkd)
  [0x0000000000F16012] _ZN6Thread8callMainEPv + 98 (/opt/splunk/bin/splunkd)
  [0xFFFFFD7FFEADD60B] _thr_slot_offset + 795 (/lib/amd64/libc.so.1)
  [0xFFFFFD7FFEADD840] smt_pause + 96 (/lib/amd64/libc.so.1)
 SunOS / dtslabmgt04 / 5.10 / Generic_147441-24 / i86pc
 Last few lines of stderr (may contain info on assertion failure, but also could be old):
    2013-01-18 10:10:25.499 -0500 Interrupt signal received
    2013-01-18 11:53:06.183 -0500 splunkd started (build 115073)
    2013-01-21 10:23:51.077 -0500 Interrupt signal received
    2013-01-21 11:26:26.127 -0500 splunkd started (build 143156)
    Assertion failed: bytesToHash < 1048576, file /opt/splunk/p4/splunk/branches/5.0.1/src/pipeline/input/FileInputTracker.cpp, line 229
    2013-01-21 11:27:50.445 -0500 splunkd started (build 143156)
    Assertion failed: bytesToHash < 1048576, file /opt/splunk/p4/splunk/branches/5.0.1/src/pipeline/input/FileInputTracker.cpp, line 229
    2013-01-21 11:29:08.706 -0500 splunkd started (build 143156)
    Assertion failed: bytesToHash < 1048576, file /opt/splunk/p4/splunk/branches/5.0.1/src/pipeline/input/FileInputTracker.cpp, line 229

Threads running: 40
argv: [splunkd -p 8089 start]
terminating...

If anyone can help, it would be much appreciated. This is our test server, but we are scheduled to roll the upgrade out to production soon.

Thanks!
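For anyone hitting the same abort, a note on what the log shows: the stderr lines point at the assertion "bytesToHash < 1048576" (1 MiB) in FileInputTracker.cpp, reached through FileInputTracker::computeCRC and WTF::loadFishState in the backtrace. That suggests the tailing thread hashes a bounded prefix of each monitored file to recognize it across restarts, and a prefix length read back from persisted tailing state exceeded the bound. Below is a minimal sketch of that pattern, inferred from the demangled symbols; every name in it is hypothetical, since Splunk's actual FileInputTracker source is not public.

    // Illustrative sketch only. All names are hypothetical; this is not
    // Splunk's code. It shows the pattern the backtrace suggests: hash the
    // first bytesToHash bytes of a monitored file so the tailer can recognize
    // the file across restarts, with a hard cap on the prefix length
    // (1048576 = 1 MiB, the value in the failing assertion).
    #include <cassert>
    #include <cstdint>
    #include <cstdio>
    #include <string>

    constexpr long kMaxCrcBytes = 1048576; // the bound the assertion enforces

    uint64_t computePrefixCrc(const std::string& path, long bytesToHash) {
        // This guard is what aborts splunkd here: a failed assert raises
        // signal 6 (Abort), matching the crash log's fatal signal.
        assert(bytesToHash < kMaxCrcBytes);
        std::FILE* f = std::fopen(path.c_str(), "rb");
        if (!f) return 0;
        uint64_t crc = 0;
        int c;
        // Toy rolling hash over the file's first bytesToHash bytes.
        for (long i = 0; i < bytesToHash && (c = std::fgetc(f)) != EOF; ++i)
            crc = crc * 131 + static_cast<uint64_t>(c);
        std::fclose(f);
        return crc;
    }

If the oversized length is indeed coming from state persisted by 4.3.1 (the loadFishState frame suggests it is read back from disk), that would explain why the assertion fires again on every restart.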

1 Solution

jbsplunk
Splunk Employee

This is a known issue, SPL-58292, which has been reported to support and is presently being investigated. At this time, there aren't any workarounds.

rush05
Engager

Thank you for your reply. Where would I go to find more details on SPL-58292?

jbsplunk
Splunk Employee

You can file a case with support if you have an enterprise contract and they'll be able to provide you with additional details.