Splunk Search

Why do I see old data in my lookup table in a search head cluster?

laytonj76
Explorer

I have a lookup file in a particular app that I use to enrich data from a particular index. This file, lookup_file.csv for example, changes each month, so we make those changes as necessary and upload a new file via the GUI.

Recently, I noticed that the lookup command does not return all results in search, whereas the inputlookup command does. I know the two work differently and there could be a legitimate explanation (e.g., lookup could be reading data in memory rather than on disk); however, I'm not sure how to force a fresh read of the file.
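
For reference, the two searches I'm comparing look roughly like this (the index and field names are placeholders, not our real ones):

| inputlookup lookup_file.csv

index=my_index | lookup lookup_file.csv host OUTPUT owner

The first reads the file directly; the second enriches events via the lookup, and it's the one coming back with incomplete results.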

The file has grown over time to exceed the default 10MB max_memtable_bytes in limits.conf. I assume this means that the file will no longer be held in memory, but rather indexed on disk. However, I am not seeing the .index directory I would expect if that were the case. Additionally, I can substantially change the contents of the file (e.g., I removed 100k of the 116k rows) and still get the same results returned as before. Also, oddly enough, if I remove the file altogether, Splunk returns an error.
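
For reference, the setting I'm referring to lives in limits.conf; the value below is the 10MB default, not something we've changed:

[lookup]
max_memtable_bytes = 10000000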

I assume all of this means that there is a reference to old data stored somewhere in Splunk that is not refreshing when we make our changes. I'm not sure where that is or how I would refresh that reference. I have restarted splunkd AND rebooted the search head, assuming that the in-memory reference would refresh or the file would be indexed. Unfortunately, neither has happened. Any thoughts on what could be happening? Any references that may help me understand what's going on?

Lastly, if it's of consequence, the file is a lookup stored in the user_dashboards app ... on a SH cluster.


laytonj76
Explorer

Thanks to sel105, my coworker referenced below, for reaching out to our Splunk rep to help identify our particular issue.

We have a SH cluster integrated with an index cluster. What we learned is that for searches to execute across our search peers (i.e. our index cluster), the search heads must send a bundle containing the knowledge objects. As the knowledge objects change, new bundles are pushed out to the search peers. When we executed our query, our search peers, not our search heads, were referencing old lookup files. A trick shared by the Splunk rep to determine whether the local lookup file returned something different was to add the local=t option to the lookup command. When we did that, we got our expected results, which confirmed that the local copy was correct and the search peers had an old version.
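
For anyone else who runs into this, the check looked roughly like the following (the index and field names are placeholders); local=true forces the lookup to run on the search head instead of on the search peers:

index=my_index | lookup local=true lookup_file.csv host OUTPUT owner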

I'm still doing some research, but since the changes were made locally on one of our SHs, and since we don't have SH replication configured completely, that change was not propagated across our SH cluster. Since the change did not make it to our captain, which I'm surmising determines when to send updated bundles to the search peers, the knowledge object changes were not pushed to the search peers. The short-term fix was to put the correct copy on the SH cluster's elected captain, and the change was propagated to the search peers. Our long-term objective will be to identify what we want replicated across our SH cluster and create a whitelist to facilitate that.
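
For anyone wanting to verify the same thing on the peer side: the knowledge bundle the search heads push lands under var/run/searchpeers on each indexer, and the .bundle files are tar archives, so you can check which version of the lookup a peer actually received with something along these lines (default install path assumed; "newest" stands for the most recent bundle file):

ls -lt /opt/splunk/var/run/searchpeers/
tar -tvf /opt/splunk/var/run/searchpeers/newest.bundle | grep lookup_file.csv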

This seems to have solved our immediate problem; however, if there is anything I've misstated, please chime in.

jkat54
SplunkTrust

See my comments on my answer below. If you believe adding local=t is the correct solution, then mark your answer as correct. I would argue this is the incorrect way to do it based on your scenario, though. Instead you should be storing the lookup in an app, deploying that app, etc., OR just indexing the CSV and using join, subsearch, append, appendcols, etc.


jkat54
SplunkTrust

I really like the title of your question. It caught my attention!

I read your story and what strikes me as odd is that you're using this as a lookup. Can you just index the file every month instead?

When it's indexed you'll be able to use all the other "join-ish" commands like join, append, subsearches, etc. It's a lot cooler than passing a 10+ MB knowledge object around in the search bundle. IMHO, YMMV.
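
Something along these lines, for example (the index and field names are made up, just to show the shape of it):

index=my_data
| join type=left host
    [ search index=monthly_csv earliest=-30d | fields host owner ]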


sel105
New Member

@jkat54: Hi, I work with Layton and wanted to chime in here. We actually were keeping old (and bad) data in the file that we shouldn't have been, so it should always stay around 6MB. So we won't run into this problem again... if we can just figure out how to get Splunk to look at the new (6MB) file instead of the version of the lookup that it already indexed. Any thoughts?


jkat54
SplunkTrust

Splunk doesn't index the lookups. Instead it stores them in a local folder.

An easy way to find lookups is to search for csv files under the splunk dir.

find /path/to/splunk/ -type f -name "*.csv"

My guess is you have this lookup file in more than one location, and when you're doing your search, the copy of the lookup in the app (aka app context) that you're searching within is the old version. Then when you're uploading the new one, you're again putting it in a different app.

So make sure there aren't multiple copies on the filesystem, and if there are, remove/replace the old copies.
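
A quick way to see every copy Splunk knows about, and which app each one lives in, is the lookup-table-files REST endpoint, run straight from the search bar (swap in your own filename):

| rest /services/data/lookup-table-files
| search title="lookup_file.csv"
| table title eai:acl.app eai:acl.sharing eai:data

The eai:data column should show the path on disk for each copy.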

My second guess is that the permissions aren't right: either role-based access control on the Splunk end, or file permissions on the Windows/Linux end.

Again, you still might consider indexing these if you use them often.

Also you might want to check out tscollect and summary indexing for best performance.
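
As a rough sketch of what I mean (the index, namespace, and field names are all made up):

index=my_data | stats count by host | collect index=my_summary

index=my_data | tscollect namespace=my_namespace

| tstats count from my_namespace by host

The first search writes results into a summary index, the second writes results into a tsidx namespace, and the third reports off that namespace.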


laytonj76
Explorer

jkat54, thanks for the input; very much appreciated.

Your recommendations were right in line with what we were thinking early on as well. However, when we found that there was only one copy on the entire server we were at a loss. I just added a comment that explains what we had to do that solved our problem in the near term. We have some long term updates to make, but we now understand the root cause.

As for the lookups being 'indexed', Splunk actually will create an index, but not in the way it would a conventional index: it creates a ".index" directory in the same directory as the lookup, which contains the tsidx file and a few other things. Here's a link to a question here in Splunk Answers.

Thanks again for the input; provided some food for thought.


jkat54
SplunkTrust

That's fine that Splunk creates a .index with tsidx files when you use a lookup... however, if you just used inputs.conf to ingest the csv file, or to monitor a folder waiting for the csv file to arrive, and then you've indexed the csv with proper field extractions, then you won't need a lookup command... you can use appendcols, subsearches, etc.
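
Roughly, the monitoring side would look something like this (the paths, index, and sourcetype names are examples only):

inputs.conf:
[monitor:///data/monthly_csv/*.csv]
index = monthly_csv
sourcetype = monthly_csv

props.conf:
[monthly_csv]
INDEXED_EXTRACTIONS = csv

With INDEXED_EXTRACTIONS = csv, the header row becomes your field names, so the columns come out already extracted.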

All you have to do in the subsearch is use _index_earliest=-30d so it only looks at the csv that was indexed in the last 30 days, etc.
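
For example, something like this (index and field names made up again):

index=my_data
| stats count by host
| append [ search index=monthly_csv _index_earliest=-30d | table host owner ]
| stats values(count) as count values(owner) as owner by host

The subsearch only pulls rows from the copy of the csv indexed in the last 30 days, and the final stats stitches them onto your events by the shared key.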

Finally, yes, you've learned about the search bundle and how it's being replicated across your peers. So go put the correct lookup on your peers too, stop just uploading to the "captain"... and why aren't you using the search head deployer to deploy the lookup in an app? You shouldn't be uploading a lookup to the captain and expecting the captain to replicate the lookup. You should be putting the lookup in an application that is deployed to the SHC & indexers, then updating that lookup file and redeploying to the search head cluster every time you update it. Which means you need a copy of the app on the search head deployer and the deployment server. Which means you have to update both copies on two servers and deploy the app every time you change it.
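
On the deployer, that workflow looks roughly like this (the app name and target host are placeholders; default install path assumed):

cp lookup_file.csv $SPLUNK_HOME/etc/shcluster/apps/your_lookup_app/lookups/
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089

apply shcluster-bundle pushes the updated app from the deployer out to every member of the search head cluster.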

And again, you're changing the lookup every month... so in my humble opinion you should be indexing it instead. Think of how many minutes of your life will be lost to logging in and uploading to different servers, executing deployment commands, verifying the lookups work, etc. When it could be as simple as making sure the file arrives in a directory every month instead.

It sounds like you need to study the following:
Deployment Servers
Search Head Deployers
Knowledge Object Permissions (aka global, app, private) http://docs.splunk.com/Documentation/Splunk/6.2.0/Knowledge/Manageknowledgeobjectpermissions
Configuration File Precedence http://docs.splunk.com/Documentation/Splunk/6.1/admin/Wheretofindtheconfigurationfiles

