We have a development environment with an ITSI implementation where we build and test dashboards.
Once tested, the dashboards are moved to production; currently I need to move a couple of glass tables along with their respective IDs.
Thanks in advance.
Almost everything in ITSI is saved in the KV store, so backing up your work and moving it to another instance is as easy as taking a backup and applying it to your production environment.
Configure > Backup and Restore > Create a New Backup
I'm looking to copy just one glass table.
Is it possible to migrate just one glass table?
What's the purpose of having two independent ITSI search heads? I understand the desire to segregate your development and production environments, but you're doubling the admin work, and you could use that extra search head to increase your cluster size, which would increase the number of concurrent searches. This will be very important later down the road as you grow the number of services and KPIs.
In my opinion, the best solution would be to have a single ITSI environment: create objects privately, then promote them to the app view or globally when they're ready for production.
The environment is quite large, and the architects chose to keep everything separate; there's no other reason. Your reasoning is quite convincing and logical.
Take a look at the kvstore_to_json.py CLI options... https://docs.splunk.com/Documentation/ITSI/3.0.2/Configure/kvstore_to_json.pyoperations
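To pull out just one glass table from a full export, something like the following sketch could work. It assumes kvstore_to_json.py produces a JSON file containing a list of objects, each with a unique `_key` field — verify that against your own export before relying on it; the function name and file paths below are hypothetical:

```python
import json

def extract_glass_table(export_path, out_path, table_key):
    """Filter a kvstore_to_json.py export down to a single glass table.

    Assumes the export is a JSON list of saved objects, each carrying
    a unique "_key" field (check your own export's structure first).
    """
    with open(export_path) as f:
        objects = json.load(f)

    # Keep only the object whose _key matches the glass table ID.
    selected = [obj for obj in objects if obj.get("_key") == table_key]
    if not selected:
        raise ValueError(f"No object with _key={table_key!r} found")

    with open(out_path, "w") as f:
        json.dump(selected, f, indent=2)
    return selected
```

The trimmed file could then be restored to the production instance in place of the full export, so only the one glass table moves.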
This won't directly answer your question, but it is another approach to going from dev/test to prod in ITSI. We created a test team and allow people to build out new ITSI knowledge objects/services/kpis there. Then admins can change the team to global for new stuff when it is baked.
This can get very dangerous at scale. In my experience, when anyone is allowed to create services, people typically take the easy route: they create ad hoc searches that run on a 1-5 minute schedule and then never look at them again. This can also bog down the rest of the Splunk environment if the indexers are busy handling many ITSI requests. It's important to limit who can create these scheduled searches to preserve CPU.
Point taken. In this case, it's more about letting people build glass tables, service analyzers, etc. with existing services and KPIs. We'll keep an eye on new ad hoc searches.