We are storing data in a Splunk lookup file on one of the forwarders.
In our distributed Splunk architecture, this lookup data is not getting forwarded to the indexers or the search head, and therefore it is not available for search or enrichment.
How can we sync or transfer this lookup data from the forwarder to the search head (or indexers) so that it can be used across the distributed environment?
To use a lookup to enrich a search, the lookup needs to exist as a lookup on the Search Head.
A lookup on a heavy forwarder is not going to be available at search time.
What you need to do is get a copy of the lookup on the SH.
The easiest (imo) option is to index the lookup file on the HF: simply define it as a file input on the HF and have Splunk monitor it for changes. You can send this to any index, but let's assume you create and use one called "lookups_index" and sourcetype "my_hf_lookup".
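A minimal sketch of what that input could look like in inputs.conf on the HF (the file path is an assumption - point it at wherever your lookup file actually lives):

# inputs.conf on the heavy forwarder (path is a placeholder)
[monitor:///opt/splunk/etc/apps/my_app/lookups/my_hf_lookup.csv]
index = lookups_index
sourcetype = my_hf_lookup
disabled = false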
On your search head, you can now create a lookup-generating search:
Depending on what your lookup contains (dates, product_ids, error codes), you would create a search like:
index=lookups_index sourcetype=my_hf_lookup
| dedup product_code
| table product_code product_description product_price
| outputlookup my_sh_lookup.csv
I like to name these something like: "LOOKUPGEN-my_sh_lookup.csv"
You can then schedule that to run once an hour, day, or week, depending on how often you expect the lookup to change.
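If you prefer to manage the schedule in configuration rather than through the UI, a sketch of the corresponding savedsearches.conf stanza on the SH could look like this (the cron schedule and time range are assumptions - adjust them to your change frequency):

# savedsearches.conf on the search head (schedule is a placeholder)
[LOOKUPGEN-my_sh_lookup.csv]
search = index=lookups_index sourcetype=my_hf_lookup | dedup product_code | table product_code product_description product_price | outputlookup my_sh_lookup.csv
cron_schedule = 0 1 * * *
enableSched = 1
dispatch.earliest_time = -24h
dispatch.latest_time = now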
You can then use the lookup in your searches:
| lookup my_sh_lookup.csv product_code OUTPUT product_description product_price
Although I find it better practice to create a lookup definition and use that rather than referencing the .csv file directly.
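For completeness, a lookup definition is just a transforms.conf stanza pointing at the file; the stanza name below is an assumption:

# transforms.conf on the search head (stanza name is a placeholder)
[my_sh_lookup]
filename = my_sh_lookup.csv

You can then reference it by name:
| lookup my_sh_lookup product_code OUTPUT product_description product_price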
Hi @gurunagasimha,
why did you create two identical questions?
Anyway, see my answer in the question https://community.splunk.com/t5/Splunk-Dev/Splunk-Distributed-Architecture/m-p/749491#M11993
Ciao.
Giuseppe
Merged both threads.
The lookup format is KV Store. We are ingesting the data through scripts and storing it in lookups on the Splunk forwarder. We are using a heavy forwarder.
Is there any other way to automatically sync the lookups?
Lookup files are not automatically forwarded from forwarders to indexers or search heads. To make the lookup data available across your distributed environment you need to get it to your Search Head (Cluster) somehow. There are a number of ways you could do this:
1) On your HF, run a scheduled search that uses | inputlookup to load the contents of the lookup and | collect to write them to an index. On your SH/SHC, create a scheduled search that loads the indexed data and uses | outputlookup to write it back to a lookup (see the sketch after this list).
2) Use a custom REST API script to copy the KV store lookup from your HF to your SH/SHC (also sketched below).
3) Use the KV Store Tools Redux app (https://splunkbase.splunk.com/app/5328) to upload from the HF to the SHC.
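As a sketch of option 1 (the lookup name, index, sourcetype, and field names are placeholders borrowed from the earlier example; the KV store lookup definition must exist on both the HF and the SH/SHC):

On the HF, a scheduled search such as:
| inputlookup my_kvstore_lookup | collect index=lookups_index sourcetype=my_hf_lookup

On the SH/SHC, a scheduled search such as:
index=lookups_index sourcetype=my_hf_lookup | dedup product_code | table product_code product_description product_price | outputlookup my_kvstore_lookup

For option 2, the KV store REST endpoints can be driven with something as simple as curl (hosts, app, collection name, and credentials are placeholders):

# read the collection from the HF
curl -k -u admin:changeme "https://<hf>:8089/servicesNS/nobody/<app>/storage/collections/data/<collection>" > data.json
# write it to the SH in one batch
curl -k -u admin:changeme -X POST -H "Content-Type: application/json" "https://<sh>:8089/servicesNS/nobody/<app>/storage/collections/data/<collection>/batch_save" -d @data.json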
Hi @gurunagasimha,
the lookup, I suppose, comes from a CSV or txt file, so read this file on the Forwarder and store it in an index or a lookup on the Indexers or Search Head so you can use it.
How to do it: create a file input on the Forwarder that reads the CSV file.
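If the file is a CSV with a header row, a props.conf stanza on the Forwarder can additionally parse the columns into fields at ingest time; a sketch, where the sourcetype name is an assumption:

# props.conf on the forwarder (sourcetype is a placeholder)
[my_hf_lookup]
INDEXED_EXTRACTIONS = csv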
Ciao.
Giuseppe