We are facing an issue with CSV lookup files after migrating from a standalone search head to a cluster. The lookups are edited using the Lookup Editor. When a new app/object is deployed via the deployer, the lookup files on the search heads roll back to the versions held on the deployer. Is this expected behaviour? If yes, what is the mechanism to sync lookups edited with the Lookup Editor back to the deployer?
Thanks for all the help. The lookups are handled entirely by the Lookup Editor.
The initial lookup files on the deployer have been cleaned up.
The dynamic lookups from the external system are FTPed to the deployer and pushed out to the search head members by the deployer.
You have two sources modifying the lookup files: an external system, and users editing via the Lookup Editor. Both of them need to be captured and consolidated on the deployer before the final/sanitised versions are pushed to the SHC.
Another option would be to keep two lookup files: say lookup_shc_static.csv, which stores changes made via the Lookup Editor on the SHC members, and lookup_ftp_dynamic.csv for the FTP feed. You could then generate a third lookup file, lookup_final.csv, by merging the two on the SHC with a scheduled search, or perhaps a one-off operation after deployment.
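The merge described above could be a scheduled search along these lines. This is only a sketch: the file names are the hypothetical ones from the example, and key_field stands in for whatever column uniquely identifies a row in your lookups.

```
| inputlookup lookup_shc_static.csv
| append
    [| inputlookup lookup_ftp_dynamic.csv]
| dedup key_field
| outputlookup lookup_final.csv
```

The dedup decides which source wins on a key collision (here the static file, because it is read first); swap the order of the two inputlookups if the FTP feed should take precedence.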
Thank you for the suggestion. These are two different lookups: lookup_x.csv is uploaded by an external application in APP A, and lookup_y.csv is edited with the Lookup Editor in APP B.
I agree with the first suggestion, to capture and consolidate at the deployer before pushing to the SHC. But it looks more like a workaround than a clean solution, which is why I am looking for an alternative.
A few options spring to mind:
1.) You could move your (semi)static lookups (i.e. the ones users are editing on your search heads) to a KV Store collection. That way users can still use the Lookup Editor, but they are no longer editing CSV files, so their changes will not be replaced when the deployer pushes the updated app. This also keeps your existing FTP logic unchanged.
2.) You could move your (semi)static lookups to a separate app, thereby excluding them from the csv updates to your 'other' app.
Whilst you need to initially push the 'lookup app' from the deployer, it will not be pushed again as long as no changes have been made to the version on the deployer, leaving your local changes intact.
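For option 1, moving a CSV to a KV Store collection is mostly a configuration exercise. A minimal sketch is below; the collection name, lookup stanza name, and fields are illustrative placeholders, not taken from this thread:

```
# collections.conf (in the app on the SHC) -- defines the KV Store collection
[user_roles]
field.user = string
field.role = string

# transforms.conf -- exposes the collection as a lookup
[user_roles_lookup]
external_type = kvstore
collection = user_roles
fields_list = _key, user, role
```

Once this is in place, searches (and the Lookup Editor) address the lookup by its transforms.conf name, and edits live in the KV Store rather than in a CSV the deployer could overwrite.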
Thank you. I will test the first solution.
The 2nd solution is not clear to me. The scenario is as follows:
There are two different lookups: lookup_x.csv is uploaded by an external application in APP A, and lookup_y.csv is edited with the Lookup Editor in APP B. Neither is static; both can change every day.
If I understand:
External App (updated CSV) --[ftp]--> Deployer --[shc bundle]--> SearchHeads
Because this changes frequently you update/push the shc bundle daily.
But you have a 'static' lookup in APP B?
In theory, if your users make a change (via the Lookup Editor) on the SHC members and you then run apply shcluster-bundle, the local changes to the CSV should NOT be overwritten as long as NO changes have been made to the app on the Deployer. See: https://docs.splunk.com/Documentation/Splunk/7.2.4/DistSearch/PropagateSHCconfigurationchanges#What_...
I agree with the above points. Can you help me understand how a dynamic lookup file can be handled? Dynamic as in CSV files that are generated by an external application and pushed to Splunk via FTP.
Currently these files are pushed to the deployer, which applies them to the search head members. This overwrites the (semi)static lookups on the search head members that were updated via the Lookup Editor.
One option is to push the CSVs directly to all the search head members. Another is to copy the (semi)static lookups from the search head members back to the deployer before deploying anything. But neither looks like a clean solution; both feel more like workarounds.
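The first workaround (delivering the dynamic CSV straight to every member, bypassing the deployer) could be scripted roughly as below. This is a dry-run sketch that only prints the copy commands; the host names, app name, and paths are hypothetical placeholders:

```shell
#!/bin/sh
# Print the scp commands that would push the dynamic lookup directly to
# each SHC member's app lookups directory, bypassing the deployer.
# Hosts, app name, and paths below are hypothetical placeholders.
LOOKUP=/data/export/lookup_x.csv
for host in sh1.example.com sh2.example.com sh3.example.com; do
  echo scp "$LOOKUP" "splunk@$host:/opt/splunk/etc/apps/APP_A/lookups/lookup_x.csv"
done
```

Note that files copied onto disk this way are not replicated by the SHC (only changes made through search or the REST API are), which is why every member needs its own copy.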
Kindly help to know if there is a better way to achieve this.
It's the expected behavior. You can deploy static lookups via the deployer, but dynamic lookups shouldn't be deployed that way. It's a bit of a confusing concept.
If we look under $SPLUNK_HOME/etc/apps/<an app> on a search head, we see a lookups directory: the lookup files all live together there, unlike other knowledge objects, which are spread across configuration files.
The search head cluster will make sure that the edited lookup is synced across all search heads - no matter which one you edit on.
To make sure you don't obliterate the changes when you push out apps from the deployer you should use "-preserve-lookups true":
splunk apply shcluster-bundle -target <URI>:<management_port> -preserve-lookups true -auth <username>:<password>
That tells the deployer to leave alone any lookup files that already exist on the search head members, preserving the edits made there.