Getting Data In

Data validation across multiple deployments

mbasharat
Builder

Hi,

We have an old Splunk architecture that we will be retiring; the new architecture is already in place. Data feeds are currently going into both architectures in parallel until all validations are complete. We now need to validate that the data is the same in both deployments, e.g. Deployment A (old) and Deployment B (new), for all data sources.

I need guidance on the right steps and validations to confirm this. I am thinking of comparing event counts for each data source, data volume, event structure, extracted fields, etc. Can someone help lay this out and point out anything I am missing, please?
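For example, for the count comparison I was planning to run something like the following on both deployments, over the same closed time window, and then diff the two result sets (the 24-hour window below is just an example; any window that has fully arrived in both deployments should work):

| tstats count where index=* earliest=-24h@h latest=@h by index, sourcetype
| sort index, sourcetype

Does that approach sound reasonable, or is there a better way to baseline the two sides?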

Deployment A is on version 6.6.5 and the new Deployment B is on version 7.x.

Thanks in advance.

1 Solution

amitm05
Builder

Out of curiosity, why did you set up a new deployment rather than upgrading the existing one? There must be a good reason, because it multiplies the effort involved.

Now, for your question, you can perform some of the following validations:
1. Compare licensing volumes, both overall and per sourcetype (see the sketch after this list).
2. Compare knowledge objects, e.g. fields, tags, event types, data models, lookups, macros, etc. (see the REST sketch further below).
3. Data model accelerations. These are important if any of your searches depend on them.
4. Users and roles (also covered by the REST sketch below).
5. Any external authentication you might be using.
6. Apps and add-ons. Take note of any customizations made to the apps and add-ons you are already using; keeping a copy of each app's /local directory would help.
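For point 1, a minimal sketch of a per-sourcetype license comparison you could run on each deployment (this reads license_usage.log from the _internal index on the license master, so it assumes your _internal retention still covers the window you want to compare):

index=_internal source=*license_usage.log type=Usage earliest=-7d@d latest=@d
| stats sum(b) as bytes by st
| eval GB=round(bytes/1024/1024/1024,2)
| sort - GB

Export the results from both deployments and diff them, the same way as for the raw event counts.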

You can decide how thorough you want to be when checking each of these pointers.
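For points 2 and 4, one way to build a comparable inventory on each side is the REST endpoints. Each line below is a separate search, and the fields kept in the table are just a starting point; adjust them to whatever level of detail you need:

| rest /servicesNS/-/-/saved/searches | table eai:acl.app, title, search
| rest /servicesNS/-/-/admin/macros | table eai:acl.app, title, definition
| rest /services/authentication/users | table title, roles
| rest /services/authorization/roles | table title, imported_roles, srchIndexesAllowed

Run each of these on Deployment A and Deployment B and compare the exported tables.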

amitm05
Builder

Please accept this as the answer if it addresses your query. Thanks.
