No, it will likely cause lots of problems.
The master node is the node that picks the replication targets, both for a newly created (i.e., hot) bucket and for a warm/cold bucket. It has no concept of two different sets of nodes: it picks targets at random from one global pool. So for either hot or warm/cold buckets, it can pick a node in the same data center as the source.
For a warm/cold bucket, if replication fails because of the acceptFrom restriction, the master simply schedules another replication of that bucket. This repeats until it happens, by random choice, to pick nodes in different data centers (with 2 data centers, roughly 2 tries on average to get it right).
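To make the "roughly 2 tries" estimate concrete, here is a toy Monte Carlo sketch (not Splunk code) of that retry loop. The 6-peer layout split evenly across two data centers "A" and "B" is an assumption for illustration; with the source in "A", 3 of the 5 candidate peers are remote, so on average it takes about 5/3 ≈ 1.7 random picks to land on a remote target.

```python
import random

def tries_until_remote(peer_dcs, source_dc, rng):
    """Count random target picks until the chosen peer sits in a
    different data center than the source (i.e., replication can
    succeed despite the acceptFrom block on local peers)."""
    tries = 0
    while True:
        tries += 1
        if rng.choice(peer_dcs) != source_dc:
            return tries

rng = random.Random(42)
# Hypothetical cluster: source in DC "A", 5 other peers (2 local, 3 remote).
peers = ["A", "A", "B", "B", "B"]
samples = [tries_until_remote(peers, "A", rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 5/3, i.e. ~2 tries
```

The exact expected value depends on the local/remote split of your cluster; the point is only that each warm/cold copy wastes one or more extra replication attempts.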
For a hot bucket, the situation is worse. The source (originating) node rolls the hot bucket on replication failure. With RF=3 and 2 data centers, it is likely that at least one of the 2 targets is local. Since local replication is blocked, hot bucket replication to that target fails, and so the source rolls the bucket. This happens repeatedly, because every hot bucket created is replicated to 2 other nodes, at least one of which is likely local. End result: lots of small buckets, which badly degrades search performance.
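A similar toy sketch (again an assumed 6-peer, 2-data-center layout, not Splunk code) shows how often a hot bucket would roll early. With RF=3 the source streams to 2 targets chosen from the 5 other peers; the copy only survives when both targets happen to be remote, which in this layout is C(3,2)/C(5,2) = 3/10, so about 70% of hot buckets roll prematurely.

```python
import random

def hot_bucket_rolls(peer_dcs, source_dc, rng):
    """With RF=3 the source replicates the hot bucket to 2 targets.
    If any target is in the source's data center, that copy is
    blocked, replication fails, and the source rolls the bucket."""
    targets = rng.sample(peer_dcs, 2)
    return any(dc == source_dc for dc in targets)

rng = random.Random(0)
# Hypothetical cluster: source in DC "A", 5 other peers (2 local, 3 remote).
peers = ["A", "A", "B", "B", "B"]
rolls = sum(hot_bucket_rolls(peers, "A", rng) for _ in range(100_000))
print(rolls / 100_000)  # ~0.7: roughly 70% of hot buckets roll early
```

The exact fraction varies with cluster size, but in any even 2-site split the majority of hot buckets would fail replication and roll, which is where the flood of small buckets comes from.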