Splunk Search

Why is Splunk crashing on the lookup command?

gots
Path Finder

We have a simple CSV lookup like:

network,descr
192.168.0.0/24,network_name

The lookup definition in transforms.conf:

[networklist_allocs_all]
filename = networklist_allocs_all.csv
max_matches = 1
min_matches = 1
default_match = OK
match_type = CIDR(network)

Any search command like:
...
| lookup networklist_allocs_all network AS src_ip
...
crashes Splunk on the indexers far too often: we see about 5-10 crash logs on each indexer per day.
Part of a crash log is included at the end of the question.

How can we resolve this? Splunk seems to have started crashing after the upgrade from version 7 to version 8; we didn't make any changes to the lookup format or definition.
Unfortunately we can't open a support case for certain reasons, so we are asking the community for help.

crash-xx.log:

[build 6db836e2fb9e] 2020-02-13 17:00:56
Received fatal signal 11 (Segmentation fault).
 Cause:
   No memory mapped at address [0x0000000000000058].
 Crashing thread: BatchSearch
 Registers:
    RIP:  [0x0000564E2F44470F] _ZN14LookupMatchMap16mergeDestructiveERS_ + 31 (splunkd + 0x223670F)
    RDI:  [0x00007FC9C01FCA80]
    RSI:  [0x0000000000000000]
    RBP:  [0x0000000000000000]
    RSP:  [0x00007FC9C01FC9A0]
    RAX:  [0x00007FC9CDB3AE08]
    RBX:  [0x0000000000000000]
    RCX:  [0x0000000000000001]
    RDX:  [0x00007FC9BFB7F608]
    R8:  [0x00007FC9C01FC9C0]
    R9:  [0x00007FC9C01FC9BF]
    R10:  [0x0000000000000010]
    R11:  [0x0000000000000080]
    R12:  [0x00007FC9928E33C0]
    R13:  [0x00007FC9C01FCA80]
    R14:  [0xAAAAAAAAAAAAAAAB]
    R15:  [0x00007FC9BFB7F600]
    EFL:  [0x0000000000010246]
    TRAPNO:  [0x000000000000000E]
    ERR:  [0x0000000000000004]
    CSGSFS:  [0x0000000000000033]
    OLDMASK:  [0x0000000000000000]

 OS: Linux
 Arch: x86-64

 Backtrace (PIC build):
  [0x0000564E2F44470F] _ZN14LookupMatchMap16mergeDestructiveERS_ + 31 (splunkd + 0x223670F)
  [0x0000564E2F444EE8] _ZN14UnpackedResult8finalizeEv + 168 (splunkd + 0x2236EE8)
  [0x0000564E2F445E26] _ZN18LookupDataProvider6lookupERSt6vectorIP15SearchResultMemSaIS2_EERK17SearchResultsInfoR16LookupDefinitionPK22LookupProcessorOptions + 2118 (splunkd + 0x2237E26)
  [0x0000564E2F451B7F] _ZN12LookupDriver13executeLookupEP29IFieldAwareLookupDataProviderP15SearchResultMemR17SearchResultsInfoPK22LookupProcessorOptions + 367 (splunkd + 0x2243B7F)
  [0x0000564E2F451C22] _ZN18SingleLookupDriver7executeER18SearchResultsFilesR17SearchResultsInfoPK22LookupProcessorOptions + 98 (splunkd + 0x2243C22)
  [0x0000564E2F43BDFF] _ZN15LookupProcessor7executeER18SearchResultsFilesR17SearchResultsInfo + 79 (splunkd + 0x222DDFF)
  [0x0000564E2F08FC7D] _ZN15SearchProcessor16execute_dispatchER18SearchResultsFilesR17SearchResultsInfoRK3Str + 749 (splunkd + 0x1E81C7D)
  [0x0000564E2F07F528] _ZN14SearchPipeline7executeER18SearchResultsFilesR17SearchResultsInfo + 344 (splunkd + 0x1E71528)
  [0x0000564E2F1931D9] _ZN16MapPhaseExecutor15executePipelineER18SearchResultsFilesb + 153 (splunkd + 0x1F851D9)
  [0x0000564E2F193901] _ZN25BatchSearchExecutorThread13executeSearchEv + 385 (splunkd + 0x1F85901)
  [0x0000564E2DEDCB8F] _ZN20SearchExecutorThread4mainEv + 47 (splunkd + 0xCCEB8F)
  [0x0000564E2EC41AC8] _ZN6Thread8callMainEPv + 120 (splunkd + 0x1A33AC8)
  [0x00007FC9CC7B76BA] ? (libpthread.so.0 + 0x76BA)
  [0x00007FC9CC4EC41D] clone + 109 (libc.so.6 + 0x10741D)
 Linux / c-6.index.splunk / 4.4.0-21-generic / #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 / x86_64
 /etc/debian_version: stretch/sid
...
Last errno: 0
Threads running: 13
Runtime: 436566.363455s
argv: [splunkd -p 8089 start]
Process renamed: [splunkd pid=8371] splunkd -p 8089 start [process-runner]
Process renamed: [splunkd pid=8371] search --id=remote_d-0.search.splunk_scheduler__gots_eWFuZGV4LWFsZXJ0cw__RMD5d361bb2fcae608a1_at_1581602460_31361_374F9CE8-43DB-48C4-9F22-7982EC4B6AD5 --maxbuckets=0 --ttl=60 --maxout=0 --maxtime=0 --lookups=1 --streaming --s
idtype=normal --outCsv=true --acceptSrsLevel=1 --user=gots --pro --roles=admin:power:user

Regex JIT enabled

RE2 regex engine enabled

using CLOCK_MONOTONIC
Preforked process=0/59436: process_runtime_msec=54987, search=0/188869, search_runtime_msec=1240, new_user=Y, export_search=Y, args_size=356, completed_searches=3, user_changes=2, cache_rotations=3

Thread: "BatchSearch", did_join=0, ready_to_run=Y, main_thread=N
First 8 bytes of Thread token @0x7fc9c70845e8:
00000000  00 e7 1f c0 c9 7f 00 00                           |........|
00000008

SearchExecutor Thread ID: 1
Search Result Work Unit Queue: 0x7fc9b77ff000
Search Result Work Unit Queue Crash Reporting
Type: NON_REDISTRIBUTE
Number of Active Pipelines: 2
Max Count of Results: 0
Current Results Count: 352
Queue Current Size: 0
Queue Max Size: 200, Queue Drain Size: 120
FoundLast=N
Terminate=N
Total Bucket Finished: 1.0012863152522, Total Bucket Count: 16

===============Search Processor Information===============
Search Processor: "lookup"
type="SP_STREAM"
search_string=" networklist_allocs_all network AS dest_ip  "
normalized_search_string="networklist_allocs_all network AS dest_ip"
litsearch="networklist_allocs_all network AS dest_ip  "
raw_search_string_set=1 raw_search_string=" networklist_allocs_all network AS dest_ip  "
args={StringArg: {string="networklist_allocs_all" raw="networklist_allocs_all" quoted=0 parsed=1 isopt=0 optname="" optval=""},StringArg: {string="network" raw="network" quoted=0 parsed=1 isopt=0 optname="" optval=""},StringArg: {string="AS" raw="AS"
 quoted=0 parsed=1 isopt=0 optname="" optval=""},StringArg: {string="dest_ip" raw="dest_ip" quoted=0 parsed=1 isopt=0 optname="" optval=""}}
input_count=885
output_count=0
directive_args=
==========================================================

vgtk4431
Path Finder

Hi,

Did you solve your issue? If so, how? I'm encountering the same behavior.


gots
Path Finder

Unfortunately, no.

Our workaround is to move the lookup to the KV store, but that is a strange solution for a four-line CSV lookup.
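
Roughly, a KV store version of the same lookup looks like this. This is a minimal sketch; the collection name networklist_allocs_all_kv is just an example, not something from our actual config.

collections.conf:

[networklist_allocs_all_kv]

transforms.conf:

[networklist_allocs_all]
external_type = kvstore
collection = networklist_allocs_all_kv
fields_list = _key, network, descr
match_type = CIDR(network)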

 


koshyk
Super Champion

My suggestion for these types of issues is to generate a diag and raise a case with Splunk.
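
For reference, the diag is generated with the standard Splunk CLI on an affected indexer (the path assumes a default installation):

$SPLUNK_HOME/bin/splunk diag

This produces a tar.gz archive of configs and logs that can be attached to the case.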


gots
Path Finder

Thank you for your support and advice.
Unfortunately it is not possible for us to raise a case with Splunk.

OK, we will try to debug it ourselves.
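
As a starting point, here is a minimal reproduction sketch; the test IP 192.168.0.5 is a hypothetical value chosen to fall inside the lookup's CIDR range:

| makeresults
| eval src_ip="192.168.0.5"
| lookup networklist_allocs_all network AS src_ip

Note that this runs on the search head only, so it exercises the lookup matching logic but not the indexer-side code path seen in the crash log.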


harsmarvania57
SplunkTrust

Dumb suggestion, but can you please try:

| lookup networklist_allocs_all network AS src_ip OUTPUT descr

And I'd suggest raising a case with Splunk.


nickhills
Ultra Champion

I have had lookups cause odd behaviour when they contain unescaped control characters or extra commas.
Can you open the CSV in a text editor or spreadsheet to verify there are no hidden surprises?
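
For example, on a GNU system this exposes hidden control characters without a hex editor (run it wherever the CSV lives):

cat -A networklist_allocs_all.csv

Tabs show up as ^I, carriage returns as ^M, and $ marks each line ending.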

If my comment helps, please give it a thumbs up!

gots
Path Finder

I checked the lookup file in a hex editor and there are no strange symbols in it.
I can see only DOS-style 0x0D 0x0A line endings at the end of each line. OK, I will try to remove them.
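
A sketch of stripping them, assuming GNU sed on the box that holds the file (dos2unix would do the same where available):

sed -i 's/\r$//' networklist_allocs_all.csv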

But it is strange that Splunk crashes only sometimes, not always, so I think the problem is not in the file but in one of:
1. the CIDR match function from the lookup definition, OR
2. memory management in the new Splunk version, OR
3. thread management in the new Splunk version


13tsavage
Communicator

Let's try to figure out whether this is a search issue or a lookup file issue. Can you return the lookup file by using inputlookup?

| inputlookup networklist_allocs_all

This will return the full lookup file in the format you showed.
network,descr
192.168.0.0/24,network_name

If you get results, then your lookup file is fine. If you do not, then something is wrong with the lookup file and it needs further investigation.


gots
Path Finder

The command
| inputlookup networklist_allocs_all
gives me exactly 31 rows with identical data, just like the file (excluding the header and the trailing newline).

As I noted in the question, we haven't changed anything in this lookup for the last few months; all changes are recorded in version control (Bitbucket).
The crash*.log files appeared after the upgrade from Splunk 7.x to 8.0.1.
We didn't find any relevant fixes in the latest 8.0.2 release.


13tsavage
Communicator

The upgrade from 7.0.x to 8.x was on the indexers; was the rest of your environment upgraded as well?


gots
Path Finder

The upgrade was completed in one day following the instructions at https://docs.splunk.com/Documentation/Splunk/8.0.1/Installation/HowtoupgradeSplunk, so today all instances in the cluster are upgraded to 8.0.1.


vgtk4431
Path Finder

We had a very similar issue; upgrading to 8.1.2 solved it.

You might try upgrading to at least 8.0.7, which fixes this known issue:

Searches crash with segmentation fault, backtrace contains _ZN14UnpackedResult8addMatchEP15SearchResultMemPK22LookupProcessorOptionsP19ILookupMatchFunctorPK16LookupDefinition

Janssen135
Loves-to-Learn

I have the same issue in version 8.1.6. Is there any better solution for this?
