Splunk Enterprise Security

How do you update lookups on a SHC while running Splunk Enterprise Security?

Lowell
Super Champion

Splunk Enterprise Security is deployed to a Search Head Cluster, along with a bunch of applicable TAs. Deployments are pushed via the deployer (always using the "-preserve-lookups true" option to keep the trackers and other such CSV files from being overwritten during each deployment).
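
For reference, the push looks something like this (target host and credentials are placeholders):

splunk apply shcluster-bundle -target https://sh1.example.com:8089 -preserve-lookups true -auth admin:changeme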

So when it's necessary to add or modify a record in an existing CSV lookup table, there's no great option. You get to choose between (1) overwriting ALL lookups, which is not a great option in a production environment, or (2) not getting your changes deployed. Neither of these options is acceptable.

The extremely inflexible binary choice imposed by -preserve-lookups when deploying apps to a search head cluster is super frustrating (especially when dealing with something as complex as ES). After upgrading to Splunk 6.5, the really ugly (yet quite simple) hack of just deleting the lookup file (via the Web UI) before running the "apply shcluster-bundle" command no longer works. (I haven't been able to figure out why...)

I have an automated deployment mechanism in place, so I was considering putting together my own script to (1) upload a new temporary lookup to a SHC node, (2) run the silly | inputlookup <MY_LOOKUP-TEMP-RANDOM_STRING> | outputlookup <MY_LOOKUP> search command, and (3) remove the temp lookup. But of course that won't work either, because Splunk doesn't support uploading lookup files via the REST API. (So I may be forced to automate via Web UI endpoints, which is problematic on multiple levels.)

Right now my manual workaround is this (rough commands after the list):
(1) Copy the lookup file to a new (temp) name.
(2) Run the "apply shcluster-bundle -preserve-lookups true"
(3) Login to the Web UI.
(4) Run the silly | inputlookup temp.csv | outputlookup real.csv search.
(5) Manually delete the temporary lookup table using the UI.
(6) Go delete the temporary lookup file from the filesystem on the deployer instance.
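
Concretely, steps 1, 2, and 4 look roughly like this (app name, target host, and credentials are placeholders; temp.csv and real.csv stand in for the actual lookup names):

cp $SPLUNK_HOME/etc/shcluster/apps/myapp/lookups/real.csv $SPLUNK_HOME/etc/shcluster/apps/myapp/lookups/temp.csv
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -preserve-lookups true -auth admin:changeme

| inputlookup temp.csv | outputlookup real.csv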

That seems like a whole lot of silly steps just to add one line to a lookup table. What am I missing?


Amirahussein
Path Finder

Hi All,

Hope this helps after all this time 🙂

One of my team members found an easier approach for adjusting lookup tables on the SHC side: he found a GUI app whose lookup edits replicate to the other search head members, and lookup changes pushed from the SHC deployer can be applied from the CLI without restarting.


thambisetty
SplunkTrust

Yes, it is a real pain updating lookups manually on search head members.
At the very least, lookups should be deployed from the deployer to search head members when the lookup doesn't exist on the members, even if we use -preserve-lookups true.

Maybe we need to create an idea for Splunk to consider building a solution around this issue.

————————————
If this helps, give a like below.

woodcock
Esteemed Legend

@Lowell, do you have a better approach now or are you still doing the same thing?

Also, for step #1, I assume that you are copying it into a different app than that which you are deploying, right? In other words, if the lookup file is not in the deployed bundle at all (you created it fresh with a temporary name), does it still get "overwritten" (deleted) with apply shcluster-bundle -preserve-lookups true?

If so, wouldn't it make more sense to just remove the lookup file real.csv from the app and then it won't get overwritten?


koshyk
Super Champion

SH clustering is such a pain. For me the greatest pain is that configs from the deployer go into "default" while UI updates from SH members go into "local". I have no clue how to version-control such updates back into the git repo!!


Lowell
Super Champion

Yeah, I know what you mean. I'm looking at a similar problem and trying to figure out object promotion pathways. If you can do all your work in a dev environment and prohibit changes in production, that helps a bunch (assuming DEV is a non-SHC instance, lol). However, that's just not realistic. My current approach (not yet field tested) is to create 2 classes of objects: fully maintained content (supported by the local Splunk admins/devs), which is version controlled, and user-created content, which is "unofficial" and untracked. The first class would always be shared, and the second may be shared at the app level (preferably with just a small group of users). This means a few things must be done by the promoter: (1) rename and clean up objects when they become official, (2) make any such objects read-only (users can still copy a dashboard, for example, but under a different name), and (3) pull all official objects into git, where changes must always be made via git (and therefore pushed via the deployer) rather than within the live system.


jwelch_splunk
Splunk Employee

outputlookup is SHC-aware.

If you can run a search that finds the data you want and | outputlookup it to your CSV, the CSV will replicate within the SHC.

An example of this is in the docs below: getting identities from AD this way never has to touch the deployer.

http://docs.splunk.com/Documentation/ES/4.6.0/User/AssetandIdentityExamples

You can also do other fancy tricks with this, e.g. given a lookup blah containing:

first, last
john, welch
buck, dually
fido, thedog

| inputlookup blah | search first!=john | outputlookup blah

This would find buck and fido and rewrite the CSV omitting john, and because it is outputlookup, the results would replicate to all nodes within your SHC.

Not a perfect solution, but if the results can be found in search, it prevents you from having to push bundles from the deployer.


Lowell
Super Champion

Yeah, I've been using tricks like that to "manually edit" the lookup tables to match the lookups that I maintain on the deployer. It's a serious pain, especially if you have to add a row.

| inputlookup blah | append [ stats count | fields - count | eval first="harvey", last="thewonderhamster" ] | outputlookup blah

Thanks for the feedback


Amirahussein
Path Finder

This worked correctly for me 🙂 without any need to restart the Splunk service.


Amirahussein
Path Finder

This works fine for me,
thanks.


starcher
Influencer

One other stopgap idea if you don't like KV Store: make a custom search command that fetches the 'current' table from another source (a database, an scp copy, whatever), presents it as search results, and have that feed outputlookup. Schedule it as desired. That would leverage the SHC's handling of inter-node replication.
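
The scheduled search could be as simple as this, where fetchtable is a hypothetical custom generating command that emits the current table as search results:

| fetchtable | outputlookup current_table.csv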


Lowell
Super Champion

At the moment, I'm really just trying to deploy small changes to the lookups in various TAs. So it's not like I'm trying to sync the same file each time I make a change.


starcher
Influencer

This is the unfortunate way it works. I would recommend logging a Splunk support enhancement request to add specific overrides to the -preserve-lookups behavior. You won't be the first one, and it will increase the number of people requesting it.

If you are talking about your own lookups, not stock ones from one of the TAs, then you can make them KV Store-based, which can be updated via the REST API.
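
For example, a record can be inserted into a KV Store collection over REST with something like this (host, app, collection name, and credentials are placeholders):

curl -k -u admin:changeme https://sh1.example.com:8089/servicesNS/nobody/myapp/storage/collections/data/mycollection -H "Content-Type: application/json" -d '{"first": "harvey", "last": "thewonderhamster"}'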


Lowell
Super Champion

Done! I've also requested that this problem be listed on the "Known Issues" page as the impact is significant. Not only do I consider this a bug, but it's a bug that causes more bugs!
