We have a lookup table that is automatically updated at 15 minutes past every hour with external results (not from Splunk). This needs to be pushed out to our search head cluster members. What would be the best way to configure this?
My understanding is that you can't just manually add a lookup to an app on the search heads individually, as they don't appear to be able to see it. Instead, you have to run the cron job to update the lookup on the deployer and then push the bundle and restart the entire search head cluster EVERY hour.
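For reference, the push described here is the deployer's configuration bundle push. A minimal sketch, assuming the lookup lives in an app called my_lookups and the external file is at /data/external_results.csv (the hostname, app name, path, and credentials are all placeholders):

```
# On the deployer: stage the updated lookup inside the app,
# then push the bundle to the search head cluster.
cp /data/external_results.csv $SPLUNK_HOME/etc/shcluster/apps/my_lookups/lookups/
$SPLUNK_HOME/bin/splunk apply shcluster-bundle \
    -target https://sh1.example.com:8089 -auth admin:changeme
```

This is what makes the hourly workflow so heavy: every update means a full bundle push to the cluster.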
Hopefully there is a better solution to this. Thanks!
You could make a custom search command to fetch your external data and pipe it to outputlookup, then schedule that search. The cluster will replicate the updated lookup across members when it is refreshed.
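A minimal sketch of what that scheduled search could look like as a savedsearches.conf stanza. The custom command name (myexternal), lookup filename, and schedule here are illustrative assumptions, not anything from the thread:

```
# savedsearches.conf -- assumes a custom search command named
# "myexternal" that fetches the external results.
[update_external_lookup]
enableSched = 1
cron_schedule = 20 * * * *
search = | myexternal | outputlookup external_results.csv
```

Because the search runs on a cluster member, outputlookup writes the file locally and search head clustering replicates it to the other members.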
This is the inevitable conclusion I came to as well. It's a shame there's no built-in way to do this; it's a pain to have to convert all our previously working crontabs into Splunk commands just to get the cluster to "see" the new lookups.
You can put a monitor on the lookup file generated by the crontab, and then a scheduled search can "build" the new lookup by referencing the data collected by the monitor.
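A sketch of that monitor input, assuming the crontab writes to /data/external_results.csv (the path, sourcetype, and index are placeholders):

```
# inputs.conf -- index the file the crontab keeps updating
[monitor:///data/external_results.csv]
sourcetype = external_results
index = main
```

A scheduled search can then read those indexed events (e.g. index=main sourcetype=external_results) and pipe the relevant fields to outputlookup, so the lookup is built on a search head and replicated by the cluster.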
The other advantage of this is that you can ensure all entries remain in the file in the event that your crontab fails for some reason (by searching over a time range greater than an hour and using dedup).
Even better, you could monitor a change file every hour and then rebuild the lookup (or KV store collection) from the old lookup plus the changes:
| inputlookup lookup.csv | append [ search "find the new data collected by change file" ] | sort - time | dedup keyto_file | outputlookup lookup.csv