
Can you help me with my Splunk question involving disaster recovery?

Builder

We have 10 indexers of 3 TB each in our environment, along with forwarders and search heads.

We do back up each of these, but the restore size is too high: ~30 TB.

Any suggestions on improving this?


SplunkTrust

So, you don't really have a lot of data (even all those disks full is only 30 TB), but the backup and restore time is higher than you'd like.

There are a lot of non-Splunk answers here involving making your backups faster. This might mean optimizing network paths, upgrading equipment, or maybe even doing the backups in an entirely different way (switching from tape to disk backup, things like that).

But I expect you are exploring Splunk-based answers.

As inventsekar alluded to, your best bet will be to convert to or build a new indexer cluster. THIS IS NOT A BACKUP, so you have to continue making backups, but it should remove or at least vastly reduce your reliance on them because it provides data resilience. Set your replication factor (RF) as high as you'd like for peace of mind. An RF of 2 keeps two copies of the data across the cluster, and if one copy disappears (for instance, a failed server), the cluster rebuilds the missing portions onto the remaining systems. It will take up twice the space, but you've got to make some concessions somewhere.
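As a rough sketch of what that looks like in configuration (the hostname and key below are placeholders, not values from this thread; the stanza names and settings are the standard Splunk 7.x clustering options in server.conf):

```ini
# server.conf on the cluster master (manager) node
[clustering]
mode = master
replication_factor = 2
search_factor = 2

# server.conf on each indexer (peer) node
# (master_uri hostname and pass4SymmKey are placeholders)
[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = yourSecretKey

[replication_port://9887]
```

With replication_factor = 2, the cluster tolerates the loss of one peer without losing searchable data; raising it trades more disk for more resilience.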

You can read more here. There are a LOT of other places to look, but the following link should give you more links to follow and keywords to search on.
http://docs.splunk.com/Documentation/Splunk/7.2.1/Indexer/Basicclusterarchitecture


SplunkTrust

Has this answer helped you solve your problem? If so, please "Accept" it so the next person stumbling across it in a search will know that this worked.

If this is still unresolved, please provide more information on what's not working.

If this is resolved and you've found another answer - great! Post that here as an answer, and go ahead and mark it as Accepted! It's OK to gather karma for yourself occasionally like that.

0 Karma

Builder

The clustering option is surely a good solution; I will look into this.
Another option would be to limit your data by applying retention. This makes sure your data size is always in check, which makes backups easier.
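For reference, a minimal sketch of retention settings in indexes.conf (the index name and values are hypothetical examples; frozenTimePeriodInSecs and maxTotalDataSizeMB are standard Splunk settings):

```ini
# indexes.conf -- retention sketch (index name and limits are examples)
[my_index]
# Freeze (delete or archive) buckets whose newest event is older than 90 days
frozenTimePeriodInSecs = 7776000
# Cap the total index size at roughly 500 GB
maxTotalDataSizeMB = 512000
```

Whichever limit is hit first triggers the oldest buckets to roll to frozen, keeping the on-disk (and therefore backup) footprint bounded.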

Thanks a lot, Rich.


Super Champion

We have 10 indexers in our environment of 3TB each
Are the indexers in a cluster?
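One quick way to tell (a sketch; the path assumes a default Linux install and may differ in your environment) is to check whether the indexer's server.conf contains a [clustering] stanza:

```shell
# Sketch: a cluster peer has a [clustering] stanza in server.conf;
# a standalone indexer typically does not. Path is an assumption.
CONF=/opt/splunk/etc/system/local/server.conf
if grep -q '^\[clustering\]' "$CONF" 2>/dev/null; then
  echo "clustering stanza present"
else
  echo "no clustering stanza (likely a standalone indexer)"
fi
```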


Builder

No clustering.
