It all depends on what you want to do - if you're looking for data redundancy within the same data center (for example), I would use clustering.
A cluster is a group of indexers configured to replicate each others' data, so that the system keeps multiple copies of all data. This process is known as index replication. By maintaining multiple, identical copies of data, clusters prevent data loss while promoting data availability for searching.
Clustering requires additional hardware in the form of a cluster master plus additional indexer peers to keep copies of the indexes on.
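As a rough sketch of what that setup looks like (hostnames, ports, and the factor values here are just placeholders - adjust for your environment), the cluster is defined in `server.conf` on the master and on each peer:

```ini
# server.conf on the cluster master
[clustering]
mode = master
replication_factor = 3    # number of copies of each bucket kept across peers
search_factor = 2         # number of searchable copies

# server.conf on each peer indexer
[replication_port://9887]

[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = yourSecretKey
```

With `replication_factor = 3`, the cluster keeps three copies of every bucket spread across the peers, so losing a single indexer doesn't lose data.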
Now, if you're looking for something along the lines of cold storage / DR (having an indexer in another data center, for example), then sending duplicate data is the best way to handle that. I don't know if they still offer it, but you may want to talk to your sales person about the HA license.
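For the duplicate-data approach, a forwarder can clone its output to two destinations by listing multiple target groups in `outputs.conf`. A minimal sketch (the group names and hosts are placeholders):

```ini
# outputs.conf on the forwarder
[tcpout]
# Listing two groups here clones every event to both destinations
defaultGroup = primary_indexers, dr_indexers

[tcpout:primary_indexers]
server = indexer1.dc1.example.com:9997

[tcpout:dr_indexers]
server = indexer1.dc2.example.com:9997
```

Each event then lands on both the primary and the DR indexer, at the cost of doubling your outbound forwarder traffic.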
Brian