I have a quite big CSV lookup file (~20MB), and I raised max_memtable_bytes to 100MB in my limits.conf.
My searches using that lookup table are really slow. How can I check whether Splunk has internally indexed that information or is still searching the lookup table directly?
If you can look at the filesystem, you'll see a directory whose name is the filename of your base lookup with ".index" appended. Example: hosts.csv -> hosts.csv.index. This directory contains .tsidx files (and some other data) which represent the "indexed" version of the lookup table. Since you indicated that your file is 20MB and you raised max_memtable_bytes to 100MB, you've actually raised the threshold at which Splunk would index the lookup. That is, it's still using an in-memory table, not an indexed version of the lookup. You might consider reverting max_memtable_bytes to its default to allow Splunk to index the lookup table.
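For example, assuming the lookup lives in an app's lookups directory (the path below is illustrative, not necessarily where yours is), you can check for the indexed version from the command line:

# Hypothetical path; adjust for wherever your lookup actually lives
ls $SPLUNK_HOME/etc/apps/search/lookups/
# hosts.csv
# hosts.csv.index/    <- present only once Splunk has indexed the lookup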
And, to be clear, it's not writing the data to an index; rather, it's applying the indexing technology to the lookup table "in place".
In general, the path where Splunk reads the entire lookup into memory is much faster than the index-then-access-on-disk path, so raising max_memtable_bytes will usually make the search run faster, at the expense of more RAM.

As sowings points out, we don't index the lookup when we read the values into memory; we just use them in RAM without a special indexing step.

If raising max_memtable_bytes makes things run slower, it may be that your system is beginning to swap.
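If you want to check whether swapping is the culprit, standard OS tools are enough; for example, on Linux you might run these while the slow search executes:

# Watch swap-in/swap-out activity (the si/so columns) once per second
vmstat 1
# Or check how much swap is currently in use
free -m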
If I set my memory limit below the file size, Splunk raises this error:

Error 'Error using lookup table '$tablename': CIDR and wildcard matching is restricted to lookup files under the in-memory size limit.' for conf '$lookupname' and lookup table '$lookuptable'

so I have to increase max_memtable_bytes!
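For anyone wondering where that restriction comes from: CIDR and wildcard lookups are defined via match_type in transforms.conf, and Splunk requires such lookups to fit in memory. A minimal sketch of the kind of definition that triggers the error (the stanza name, filename, and field name here are made up for illustration):

[my_cidr_lookup]
filename = networks.csv
# CIDR matching on an IP field forces the lookup to be handled in memory,
# so the file must stay under max_memtable_bytes
match_type = CIDR(ip_range)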
I came across the same issue, and the first thing I did was also to raise the limit to literally "100MB". It's probably obvious in hindsight, since the value has to be an integer number of bytes, but what fixed the problem was setting it to

max_memtable_bytes = 100000000

instead of 100MB.
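For context, a minimal sketch of where the setting goes, assuming you edit $SPLUNK_HOME/etc/system/local/limits.conf (a restart may be needed for the change to take effect):

[lookup]
# Size limit in bytes, as a plain integer; suffixes like "100MB" are not parsed
max_memtable_bytes = 100000000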