Getting Data In

Data validation across multiple deployments

mbasharat
Builder

Hi,

We have an old Splunk architecture that we will be retiring; the new architecture is already in place. Data feeds are currently configured to go into both architectures in parallel until all validations are complete. We now need to validate that the data is the same in both deployments, e.g. Deployment A (old) and Deployment B (new), for all data sources.

I need guidance on the right steps and validations to confirm this. I am thinking of comparing event counts for each data source, data size, events, event structure, fields, etc. Can someone help me lay this out, in case I am missing something, please?
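As a starting point, the per-sourcetype event count comparison described above can be sketched in SPL. The index filter and time range below are assumptions; adjust them to your environment, run the identical search on both deployments over the same time window, and diff the results:

```
| tstats count where index=* earliest=-24h@h latest=@h by index, sourcetype
| sort index, sourcetype
```

Using `tstats` against indexed fields is much faster than a raw event search at this scale. Exporting both result sets to CSV makes the side-by-side comparison straightforward.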

Deployment A is on version 6.6.5 and new deployment B is on version 7.x.

Thanks in advance.


amitm05
Builder

Out of curiosity, I'd like to know why you set up a new deployment rather than upgrading your existing one. There must be a good reason, since doing so multiplies the effort.

Now, to your question, you can perform the following validations:
1. Compare licensing volumes, both overall and per sourcetype.
2. Compare knowledge objects, e.g. fields, tags, event types, data models, lookups, macros, etc.
3. Data model accelerations. These are important if any of your searches rely on them.
4. Users and roles.
5. Any external authentication you might be using.
6. Apps and add-ons. It is important to note whether there are any customizations on the apps and add-ons you already use; keeping a copy of each app's /local directory would help.

You can then choose how thorough you want to be when checking each of these points.
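For the licensing comparison in point 1, a sketch over `license_usage.log` on each deployment looks like the following; the 30-day window is an assumption, and the search needs access to the `_internal` index (typically run from the license master or a search head that sees it):

```
index=_internal source=*license_usage.log* type="Usage" earliest=-30d@d
| stats sum(b) AS bytes BY st
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB
```

Here `b` is the indexed bytes and `st` the sourcetype as recorded in `license_usage.log`. Running the same search on both deployments over the same window gives a per-sourcetype volume comparison.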


amitm05
Builder

Please accept this as the answer if it resolves your query. Thanks.
