
Is my (big) csv lookup file indexed in memory by Splunk?

RiccardoV
Communicator

Hi,
I have a quite big CSV file (~20 MB), and I changed max_memtable_bytes to 100 MB in my limits.conf file.
My searches using that lookup table are really, really slow. How can I check whether Splunk has indexed that information internally, or whether it is still scanning the lookup table directly?

thanks!

1 Solution

sowings
Splunk Employee

If you can look at the filesystem, you'll see a directory named after your base lookup file with ".index" appended. Example: hosts.csv -> hosts.csv.index. This directory contains .tsidx files (and some other data) which represent the "indexed" version of the lookup table. Since you indicated that your file is 20 MB and you raised max_memtable_bytes to 100 MB, you've actually raised the threshold above which Splunk would index the lookup. That is, it's still using an in-memory table, not an indexed version of the lookup. You might consider reverting max_memtable_bytes to its default to allow Splunk to index the lookup table.

And, to be clear, it's not writing the data to an index, rather, it's applying the indexing technology to the lookup table "in place".
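A quick way to check is to look for that ".index" directory next to the CSV on the search head. A minimal shell sketch; "hosts.csv" and the temp-dir demo are placeholders, not your real lookup paths:

```shell
# Sketch, assuming a stock layout; "hosts.csv" and the simulated
# directory below are placeholders, not your real paths.
is_lookup_indexed() {
    # Splunk writes the indexed form of a lookup into a "<file>.index/"
    # directory next to the CSV; its presence means the lookup was indexed.
    [ -d "$1.index" ]
}

# Demo against a simulated lookups directory:
tmp=$(mktemp -d)
touch "$tmp/hosts.csv"
is_lookup_indexed "$tmp/hosts.csv" && echo "indexed" || echo "in-memory"  # in-memory
mkdir "$tmp/hosts.csv.index"
is_lookup_indexed "$tmp/hosts.csv" && echo "indexed" || echo "in-memory"  # indexed
```

On a real deployment you would point this at the app's lookups directory (e.g. under $SPLUNK_HOME/etc/apps/<app>/lookups) rather than a temp dir.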


jankowsr
Path Finder

I came across the same issue, and the first thing I did was also to increase the limit to literally "100MB". It's probably obvious that the value has to be an integer, but what fixed the problem was setting it to
max_memtable_bytes = 100000000
instead of 100MB.
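In limits.conf that fix looks like the fragment below; the [lookup] stanza is where this setting lives, and 100000000 (~100 MB) is just an example value:

```ini
# limits.conf (e.g. in $SPLUNK_HOME/etc/system/local/)
[lookup]
# Must be a plain integer number of bytes; a suffix like "100MB"
# is not parsed for this setting.
max_memtable_bytes = 100000000
```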



RiccardoV
Communicator

If I lower my memory limit below the file size, Splunk raises this error:

Error 'Error using lookup table '$table_name': CIDR and wildcard matching is restricted to lookup files under the in-memory size limit.' for conf '$lookup_name' and lookup table '$lookup_table'

so I have to increase max_memtable_bytes!


jrodman
Splunk Employee

In general, the path where Splunk reads the entire lookup into memory is much faster than the index-then-access-on-disk path, so raising max_memtable_bytes will usually make the search run faster, at the expense of more RAM.

As sowings points out, we don't index when we read the values into memory; we just use them in RAM without a special indexing step.

If raising max_memtable_bytes makes things run slower, it may be that your system is beginning to swap.
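One way to check for that, assuming a Linux search head, is to read swap usage from /proc/meminfo while the slow search runs; this is a generic OS check, not a Splunk feature:

```shell
# Read current swap usage from /proc/meminfo (Linux only).
# Swap-in-use growing between search runs suggests memory pressure.
swap_total_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free_kb=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
echo "swap in use: $((swap_total_kb - swap_free_kb)) kB"
```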
