Splunk Enterprise

Site decommission - How will originating data be replicated?

giulioBalza
Path Finder

Hello,

I have to decommission a site because its datacenter is being shut down. We currently have four sites with 10 indexers each.

The site decommission procedure is well documented; what is not clear is how the data originating from the decommissioned site is replicated to the remaining site specified by the mapping, using:

site_mappings = site4:site2

Data originating from site4 is replicated to site2. Suppose there are 20 TB of data: how much data does each of the indexers on site2 receive? Is there some sort of balancing (2 TB each), or is it not predictable?
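For reference, after the change the [clustering] stanza in server.conf on the cluster master would look roughly like this (the replication and search factor values below are only examples, not our real ones):

[clustering]
mode = master
multisite = true
available_sites = site1,site2,site3
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
site_mappings = site4:site2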

It is also not clear whether the replicated buckets for the decommissioned site are removed by Splunk when the cluster master is restarted, or whether this has to be done manually.

I need this information to estimate whether the current file system size is sufficient.

Thanks

 


richgalloway
SplunkTrust

As I read the docs, the site_mappings setting does not actually replicate any data. It merely tells the CM to use a different set of primary buckets. So changing the site mappings will not affect your existing storage use.

However, as indexers are shut down in the decommissioned site, the CM may need to copy buckets to another site to maintain the site replication/search factor.  That *will* affect your storage use.  In the end, the 3 sites will need enough storage for site_replication_factor copies of all of your data.
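To make that concrete with a back-of-the-envelope example, assuming site_replication_factor = origin:2,total:3 and treating the 20 TB from the question as the on-disk size of a single copy (substitute your actual values):

single-copy size of site4-originated data: 20 TB
copies required by total:3               : 3
storage for those buckets across the remaining sites after fix-up: ~3 x 20 TB = 60 TB

How that total splits across sites and individual indexers depends on the origin constraint and on which peers the CM picks as replication targets, so I would not rely on an exact per-indexer figure.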

---
If this reply helps you, Karma would be appreciated.

giulioBalza
Path Finder

Thanks Rich,

it is probably because I am not a native English speaker, but this part from https://docs.splunk.com/Documentation/Splunk/8.2.6/Indexer/Decommissionasite

To deal with this issue, you can map decommissioned sites to active sites. The bucket copies for which a decommissioned site is the origin site will then be replicated to the active site specified by the mapping, allowing the cluster to again meet its replication and search factors.

mentions bucket replication from the decommissioned site to the active site.

Regards

 


isoutamo
SplunkTrust

Hi

I understand this as:

When you have set those parameters on the CM, then restarted it and disabled maintenance mode, it starts fix-up tasks. These ensure that any buckets which no longer fulfil the SF+RF (because some copies were previously on the decommissioned site) are fixed/moved to the site which "replaces" the decommissioned site.

After those fix-ups have finished, you should remove the old indexers on the decommissioned site. The docs don't say whether you should use "splunk offline --enforce-counts", just "splunk offline", or even "splunk stop". Personally I use the first one, and if there are issues with it I change to the second one. Maybe you should ask the docs team about that (just write a question at the bottom of that page and they will reply to you).

After all the nodes have been removed, I additionally rebalance the cluster data.
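A rough command-line sketch of that order, using the standard commands from the docs (adjust to your own environment; the config edit is the same site_mappings change discussed above):

# On the cluster master: pause fix-ups, apply the mapping, then resume
splunk enable maintenance-mode
# edit server.conf [clustering]: remove site4 from available_sites, add site_mappings = site4:site2
splunk restart
splunk disable maintenance-mode
# wait until the fix-up tasks finish and the cluster meets SF/RF again

# On each site4 indexer, once the fix-ups are done
splunk offline --enforce-counts   # fall back to plain "splunk offline" if this does not complete

# Back on the cluster master, after all site4 peers are gone
splunk rebalance cluster-data -action start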

r. Ismo

richgalloway
SplunkTrust

Thanks for referring me to that document.  It confirms buckets are indeed copied when a site is mapped.

---
If this reply helps you, Karma would be appreciated.