Splunk Enterprise Security is deployed to a Search Head Cluster, along with a number of applicable TAs. Deployments are pushed via the deployer, always using the "-preserve-lookups true" option to keep the trackers and other such CSV files from being overwritten on each deployment.
So when it's necessary to add or modify a record in an existing CSV lookup table, there's no great option. You get to choose between (1) overwriting ALL lookups, which is not a great option in a production environment, or (2) not getting your changes deployed. Neither of these options is acceptable.
The extremely inflexible binary choice imposed by
-preserve-lookups when deploying apps to a search head cluster is super frustrating (especially when dealing with something as complex as ES). After upgrading to Splunk 6.5, the really ugly (yet quite simple) hack of just deleting the lookup file (via the Web UI) before running the "apply shcluster-bundle" command no longer works. (I haven't been able to figure out why.)
I have an automated deployment mechanism in place, so I was considering putting together my own script to (1) upload a new temporary lookup to a SHC node, (2) run the silly
| inputlookup <MY_LOOKUP-TEMP-RANDOM_STRING> | outputlookup <MY_LOOKUP> search command, and (3) remove the temp lookup. But of course that won't work either, because Splunk doesn't support uploading lookup files via the REST API. (So I may be forced to automate via Web UI endpoints, which is problematic on multiple levels.)
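For what it's worth, step (2) alone is scriptable over REST, since the search/jobs endpoint will run an inputlookup/outputlookup search; it's the file upload in step (1) that has no supported endpoint. A minimal sketch, with hostname, credentials, and lookup names as placeholders:

# Run the copy search on a SHC member via the REST API
curl -k -u admin:changeme https://sh1:8089/services/search/jobs \
    -d search='| inputlookup my_lookup_temp.csv | outputlookup my_lookup.csv'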
Right now my manual workaround is this:
(1) Copy the lookup file to a new (temp) name.
(2) Run "apply shcluster-bundle -preserve-lookups true" from the deployer.
(3) Log in to the Web UI.
(4) Run the silly
|inputlookup temp.csv | outputlookup real.csv search.
(5) Manually delete the temporary lookup table using the UI.
(6) Go delete the temporary lookup file from the filesystem on the deployer instance.
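For reference, steps 1 and 2 on the deployer side look roughly like this (app name and member URI are placeholders):

# On the deployer: stage the lookup under a temporary name, then push the bundle
cp $SPLUNK_HOME/etc/shcluster/apps/my_app/lookups/real.csv \
   $SPLUNK_HOME/etc/shcluster/apps/my_app/lookups/temp.csv
splunk apply shcluster-bundle -target https://sh1:8089 -preserve-lookups true -auth admin:changeme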
That seems like a whole lot of silly steps just to add one line to a lookup table. What am I missing?
@Lowell, do you have a better approach now or are you still doing the same thing?
Also, for step #1, I assume that you are copying it into a different app from the one you are deploying, right? In other words, if the lookup file is not in the deployed bundle at all (you created it fresh with a temporary name), does it still get "overwritten" (deleted) with
apply shcluster-bundle -preserve-lookups true?
If so, wouldn't it make more sense to just remove the lookup file
real.csv from the app and then it won't get overwritten?
SH clustering is such a pain. For me the greatest pain is that configs from the deployer go into "default" while UI updates on a SH member go into "local". I have no clue how to version control such updates back into the git repo!!
Yeah, I know what you mean. I'm looking at a similar problem and trying to figure out object promotion pathways. If you can do all your work in a dev environment and prohibit changes in production, that helps a bunch. (Assuming DEV is a non-SHC instance, lol.) However, that's just not realistic. My current approach (not yet field tested) is to create two classes of objects: fully maintained objects (supported by the local Splunk admins/devs), which are version controlled, and user-created content, which is "unofficial" and untracked. The first class would always be shared, and the second may be shared at the app level (preferably with just a small group of users). This means a few things must be done by the promoter: (1) rename and clean up objects when they become official, (2) make any such objects read-only (users can still copy a dashboard, for example, but under a different name), and (3) pull all official objects into git, where changes must always be made via git (and therefore pushed via the deployer) rather than within the live system.
outputlookup is SHC-aware.
If you can run a search that finds the data you want and | outputlookup it to your CSV, that CSV will replicate within the SHC.
An example of this: getting identities from AD would never have to touch the deployer.
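As a hedged sketch of that idea, assuming the SA-ldapsearch app is installed (the domain, filter, attributes, and lookup name here are placeholders):

| ldapsearch domain=default search="(objectClass=user)" attrs="sAMAccountName,displayName,mail"
| outputlookup ad_identities.csv

Because the outputlookup runs on a SHC member, the resulting CSV replicates to the other members on its own.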
You can also do other fancy tricks with this, e.g.:
| inputlookup blah | search first!=john | outputlookup blah
This would find buck and fido and rewrite the CSV omitting john, and because it uses outputlookup, the results would replicate to all nodes within your SHC.
Not a perfect solution but if the results can be found in search it prevents you from having to push bundles from the deployer.
Yeah, I've been using tricks like that to "manually edit" the lookup tables to match the lookups that I maintain on the deployer. It's a serious pain, especially if you have to add a row.
| inputlookup blah | append [ stats count | fields - count | eval first="harvey", last="thewonderhamster" ] | outputlookup blah
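If you're on a version with makeresults (around 6.3 and later, if memory serves), the appended row can be generated a little less hackishly; same hypothetical lookup and field names as above:

| inputlookup blah | append [| makeresults | eval first="harvey", last="thewonderhamster" | fields - _time] | outputlookup blah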
Thanks for the feedback
One other stop-gap idea if you don't like the KV store: make a custom search command that fetches the "current" table from another source (a database, scp, whatever), presents it as search results, and pipe that to outputlookup. Schedule it as desired. That would leverage the SHC's handling of inter-node replication.
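The scheduling half could be a saved search; a sketch in savedsearches.conf, where fetchtable is a hypothetical custom generating command standing in for whatever fetches your table:

# savedsearches.conf
[sync_real_lookup]
# fetchtable is a made-up command name; substitute your own
search = | fetchtable source=inventory_db | outputlookup real.csv
cron_schedule = */30 * * * *
enableSched = 1
dispatch.earliest_time = -5m
dispatch.latest_time = now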
This is the unfortunate way it works. I would recommend logging an enhancement request with Splunk support to add specific overrides to the -preserve-lookups behavior. You won't be the first, and it will help increase the number of people requesting it.
If you are talking about your own lookups, not stock ones from one of the TAs, then you can make them KV store based, and those can be updated via the REST API.
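A minimal sketch of that approach, with the collection, app, and field names made up for illustration. First the lookup definition:

# collections.conf
[my_lookup_collection]

# transforms.conf
[my_lookup]
external_type = kvstore
collection = my_lookup_collection
fields_list = _key, first, last

Then a record can be inserted from anywhere over REST, no deployer push required:

# POST a new record into the collection (host/credentials are placeholders)
curl -k -u admin:changeme \
    https://sh1:8089/servicesNS/nobody/my_app/storage/collections/data/my_lookup_collection \
    -H 'Content-Type: application/json' \
    -d '{"first": "harvey", "last": "thewonderhamster"}'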