Getting Data In

Archiving Best Practices for a Clustered Environment

marxsabandana
Path Finder

We currently have a C1 architecture (3 clustered indexers / 1 search head, replication factor of 3). Are there any best practices or guidelines for archiving frozen data ourselves?

I've checked the docs, and they indicate that archiving becomes more complicated when a cluster keeps multiple copies of each bucket (for example, with a replication factor of 3). Please see the excerpt below from https://docs.splunk.com/Documentation/Splunk/8.2.4/Indexer/Automatearchiving :

The problem of archiving multiple copies

Because indexer clusters contain multiple copies of each bucket, if you archive the data using the techniques described earlier in this topic, you archive multiple copies of the data.

For example, if you have a cluster with a replication factor of 3, the cluster stores three copies of all its data across its set of peer nodes. If you set up each peer node to archive its own data when it rolls to frozen, you end up with three archived copies of the data. You cannot solve this problem by archiving just the data on a single node, since there's no certainty that a single node contains all the data in the cluster.

The solution to this would be to archive just one copy of each bucket on the cluster and discard the rest. However, in practice, it is quite a complex matter to do that. If you want guidance in archiving single copies of clustered data, contact Splunk Professional Services. They can help design a solution customized to the needs of your environment.
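For context, one approach I've seen discussed in the community (unsupported, and only a sketch under assumptions) relies on the cluster bucket naming convention: on each peer, original bucket copies are named db_* and replicated copies rb_*. A coldToFrozenScript that archives only db_* buckets and silently discards rb_* buckets leaves roughly one archived copy per bucket. The script name, coldToFrozenOneCopy.py, and the archive path are hypothetical; adjust for your environment.

    #!/usr/bin/env python3
    # Hypothetical coldToFrozen script: archive only original ("db_")
    # bucket copies so an RF=3 cluster keeps one archived copy per bucket.
    # Splunk invokes the script with the frozen bucket's path as its
    # only argument.
    import os
    import shutil
    import sys

    ARCHIVE_ROOT = "/mnt/frozen_archive"  # assumption: a path every peer can reach

    def main():
        if len(sys.argv) != 2:
            sys.exit("usage: coldToFrozenOneCopy.py <bucket_path>")
        bucket = sys.argv[1].rstrip(os.sep)
        name = os.path.basename(bucket)

        # Replicated copies are named rb_*; returning success without
        # copying lets Splunk delete the replica.
        if name.startswith("rb_"):
            return

        # The bucket path looks like .../<index>/colddb/<bucket>, so the
        # index name sits two directory levels up.
        index = os.path.basename(os.path.dirname(os.path.dirname(bucket)))
        dest = os.path.join(ARCHIVE_ROOT, index, name)
        if not os.path.isdir(dest):
            shutil.copytree(bucket, dest)
        # Exiting 0 tells Splunk the bucket is safe to delete; a non-zero
        # exit makes Splunk retry the script later.

    if __name__ == "__main__":
        main()

You would wire this up per index in indexes.conf on each peer, e.g. coldToFrozenScript = "$SPLUNK_HOME/bin/python3" "/opt/splunk/bin/coldToFrozenOneCopy.py". The obvious caveat, and part of why the docs point to Professional Services, is that if the peer holding a bucket's db_ copy is down when the bucket freezes, no copy of that bucket gets archived at all.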
