I am looking for the best way to provide redundancy to my Splunk solution and plan for the future at the same time. Current ingestion is roughly 20 GB/day, and I expect that to double within the next few years. The current solution is a single server acting as both search head and indexer.
My concern is that a single system does not provide redundancy. I would like to go ahead and build redundancy into my setup, but I'd like an informed opinion on which option to take.
The two options I'm currently aware of are a Splunk cluster setup and a duplicate indexer setup.
The Splunk cluster has the benefit of providing redundancy without requiring me to purchase additional licensing. On the other hand, I will have to procure additional servers to build the cluster. With a duplicate indexer setup, I can have my forwarders send a copy of everything to two (or more) indexers. This will require doubling my Splunk license and doubling the network traffic required to deliver the data to the indexers.
I know that the recommended number of indexers for 20 GB/day is one, so a cluster may be a bit excessive.
I appreciate any advice you can provide.
It all depends on what you want to do. If you're looking for data redundancy in the same data center (for example), I would use clustering.
A cluster is a group of indexers configured to replicate each others' data, so that the system keeps multiple copies of all data. This process is known as index replication. By maintaining multiple, identical copies of data, clusters prevent data loss while promoting data availability for searching.
Clustering requires additional hardware in the form of a cluster master plus additional indexer peers to keep copies of the indexes on.
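Roughly speaking, the wiring lives in server.conf on the cluster master and on each peer. This is only a sketch; the host name, port, replication factor, and shared key below are placeholders you'd set for your own environment:

    # server.conf on the cluster master (placeholder values)
    [clustering]
    mode = master
    replication_factor = 2
    search_factor = 2
    pass4SymmKey = <shared-secret>

    # server.conf on each indexer peer (placeholder values)
    [replication_port://9887]

    [clustering]
    mode = slave
    master_uri = https://cluster-master.example.com:8089
    pass4SymmKey = <shared-secret>

With a replication factor of 2, the cluster keeps two copies of every bucket across the peers, so losing a single indexer doesn't lose data or searchability.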
Now, if you're looking for something along the lines of cold storage / DR (having an indexer in another data center, for example), then sending duplicate data is the best way to handle that. I don't know if they still offer it, but you may want to talk to your sales rep about the HA license.
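If you go the duplicate-data route, the cloning is configured on the forwarders in outputs.conf, something like the sketch below (group names, hosts, and ports are placeholders). Listing two target groups in defaultGroup makes the forwarder send a copy of every event to both groups, which is exactly what doubles your license usage and network traffic:

    # outputs.conf on each forwarder (placeholder hosts/ports)
    [tcpout]
    # two target groups => every event is cloned to both
    defaultGroup = primary_indexers, dr_indexers

    [tcpout:primary_indexers]
    server = indexer1.example.com:9997

    [tcpout:dr_indexers]
    server = dr-indexer1.example.com:9997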
Brian