I have two search heads in a cluster. SH-A is locked down and is only used by certain staff. SH-B is open to others. SH-A updates a lookup table several times a day, triggered by search results and not based on a schedule. How can I replicate only this lookup table to SH-B? I don't want to replicate any other search app data.
I've tried using the REST API to simply upload the lookup table from SH-A to SH-B with a script, but it looks like this isn't possible.
I assume you mean the servers are unclustered; otherwise they would already be sharing lookups and authentication configuration.
In any case, it IS possible to sync a lookup between search heads. Please read this article: https://www.splunk.com/blog/2017/06/08/syncing-lookups-using-pure-spl.html
All the best! Chris.
@chrisyoungerjds - the article requires adding SH-A, the locked-down server, as a search peer on SH-B. That exposes the entire search head to SH-B. Per my original question:
"How can I replicate only this lookup table to SH-B? I don't want to replicate any other search app data."
Also, I was interested in pushing the lookup from SH-A to SH-B when it changes (using a script, for example). The article uses a scheduled search on SH-B to sync the SH-A lookup table, say every 10 minutes. This isn't efficient because
"SH-A updates a lookup table several times a day, triggered by search results and not based on a schedule".
Fair enough. In that case, I guess your best option would be to write a small modular input, or a script external to Splunk, that periodically checks the lookup file and does an scp to SH-B if it has changed. It should be pretty simple.
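A minimal sketch of that idea, assuming key-based SSH auth between the hosts; the lookup path, destination, and poll interval here are placeholders you would adjust for your own app and user context, and a real deployment would want logging and error handling around the copy:

```python
import hashlib
import subprocess
import time
from pathlib import Path

# Assumed paths - adjust to your actual app/lookup and SH-B host.
LOOKUP = Path("/opt/splunk/etc/apps/search/lookups/my_lookup.csv")
DEST = "splunk@sh-b:/opt/splunk/etc/apps/search/lookups/"
POLL_SECONDS = 60


def file_digest(path: Path) -> str:
    """Return a SHA-256 hex digest of the file contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def sync(path: Path, dest: str) -> None:
    """Push the changed lookup to SH-B (assumes key-based scp works)."""
    subprocess.run(["scp", str(path), dest], check=True)


def watch(path: Path, dest: str, poll: int = POLL_SECONDS) -> None:
    """Poll the lookup file and sync only when its digest changes."""
    last = None
    while True:
        digest = file_digest(path)
        if digest != last:
            sync(path, dest)
            last = digest
        time.sleep(poll)
```

Hashing the file rather than checking the modification time avoids re-copying when Splunk rewrites the lookup with identical contents.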
Do you know if there is a way to trigger a refresh of the lookup file from a script or the REST API, without refreshing the entire environment (i.e. debug->refresh)? It seems inefficient to constantly check the lookup file for changes.