Deployment Architecture

Does anyone know why /opt/splunk/var/run/splunk/lookup_tmp would fill up to 65GB on a search head?

tkw03
Communicator

How would I clean that directory, and what would the impact of cleaning it be?

Thanks for the feedback!
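
Before cleaning anything, here is a minimal shell sketch for seeing what is actually consuming the space (the path is taken from the question; the specific commands are just one way to inspect it):

    # Total size and file count of the lookup temp directory
    du -sh /opt/splunk/var/run/splunk/lookup_tmp
    find /opt/splunk/var/run/splunk/lookup_tmp -type f | wc -l

    # Largest files first, to see which lookups the temp files belong to
    find /opt/splunk/var/run/splunk/lookup_tmp -type f -printf '%s\t%p\n' | sort -rn | head -20

    # Newest files first, to check whether the directory is still being written to
    ls -lt /opt/splunk/var/run/splunk/lookup_tmp | head -20

As far as I know, this directory is a staging area for lookup writes and replication, so deleting files that are actively being written could break an in-flight lookup update; inspecting first is safer.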

1 Solution

tkw03
Communicator

Found this was an issue with replication. For some reason this search head cluster member was out of sync, and replication ran away on this member. The reason the directory kept filling is that all of the updates to the lookup got queued, so every time the directory was cleared the updates came back. A simple restart of the member and a manual resync fixed the issue.
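
For anyone who needs to do the same thing, here is a minimal sketch of that fix, assuming it is run on the affected SHC member and that Splunk is installed under /opt/splunk. Note that the resync is destructive to the member's local replicated configuration, so check the member's state first.

    # Check how this member currently sees the cluster
    /opt/splunk/bin/splunk show shcluster-status

    # Restart the member
    /opt/splunk/bin/splunk restart

    # Destructive resync: discard this member's replicated configuration
    # and pull a fresh copy from the captain
    /opt/splunk/bin/splunk resync shcluster-replicated-config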

richgalloway
SplunkTrust

@tkw03 If your problem is resolved, please accept the answer to help future readers.


tkw03
Communicator

I had to wait for moderation before I could accept it.


BainM
Communicator

Can you please add some more detail? Is this an SHC member or a standalone search head? Do you use a large number of lookup files? If so, are you constantly updating them?


tkw03
Communicator

It's a member of a search head cluster, and yes, we do use a fair number of lookups.

So I got some more info. The cause of this was lookup file updates, apparently hundreds per second. I have stopped the saved searches that update those lookup files for now, until I can get them tuned, but the problem is that the path is still filling even though the saved searches were stopped yesterday. I removed the dispatch files on this host and removed the files in lookup_tmp, but they keep refilling the filesystem.
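
A hedged sketch of how one might confirm what keeps refilling the directory (this assumes watch and lsof are installed on the host):

    # Watch the directory size in near real time
    watch -n 5 'du -sh /opt/splunk/var/run/splunk/lookup_tmp'

    # See which processes currently have files open under the directory;
    # splunkd search or replication processes showing up here point at the writer
    lsof +D /opt/splunk/var/run/splunk/lookup_tmp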

I checked the replication settings in distsearch.conf, and I think these files are not replicated because of:

  [replicationBlacklist]
    lookupindexfiles = (system|apps/*|users(/_reserved)?/*/*)/lookups/*.(tmp$|index($|/...))

Not sure what's going on. I'd roll this host, but I'm afraid it might just kick the issue over to a different server or do nothing at all.
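
One hedged way to confirm which replication settings are actually in effect on this member is btool, which prints the merged configuration along with the file each line comes from:

    # Effective distsearch.conf replication blacklist/whitelist stanzas
    /opt/splunk/bin/splunk btool distsearch list replicationBlacklist --debug
    /opt/splunk/bin/splunk btool distsearch list replicationWhitelist --debug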


tkw03
Communicator

After researching, I have found what I believe to be the issue: I think this host is out of sync with the rest of the cluster, and that ballooned the replication. I will resync it on Monday evening and report back.
