Deployment Architecture

Archiving of clustered indexes

lohit
Path Finder

Hi all,

I have some questions about archiving logs from clustered indexes.

1. How should we archive clustered indexes? If we use a normal script, duplicate copies will be archived because of cluster replication. How can we archive only one unique copy of each bucket across the cluster? What is the best practice for this?
2. How can we archive data from Splunk directly to Amazon S3, given that accessing S3 requires parameters such as the AWS secret key, AWS key ID, and the S3 bucket name? How can we achieve this? I don't want to use Shuttl, as it maintains its own index and my searches depend heavily on the main index.

Please provide some inputs on it.

Lohit


gfuente
Motivator

Hello

The answer to the first question is in the docs:

http://docs.splunk.com/Documentation/Splunk/6.0/Indexer/Backupindexeddata#Clustered_data_backups
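To make the duplicate-copy point concrete: in a cluster each bucket exists as one original copy (directory name starting with db_) and one or more replicated copies (rb_), so an archiving script that skips rb_ directories archives each bucket exactly once. A minimal sketch (the example path is illustrative, not from your environment):

```python
import os

def is_original_bucket(bucket_path):
    """Return True for an original bucket copy (db_*), False for a
    replicated copy (rb_*). Archiving only originals avoids the
    duplicate archives produced by cluster replication."""
    name = os.path.basename(bucket_path.rstrip("/"))
    return name.startswith("db_")

# Only the db_ copy of each bucket passes the filter:
print(is_original_bucket("/opt/splunk/var/lib/splunk/main/db/db_1388_1356_3"))
print(is_original_bucket("/opt/splunk/var/lib/splunk/main/db/rb_1388_1356_3"))
```

A check like this would sit at the top of whatever archiving script each indexer runs, so replicated copies are simply skipped.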

Regards
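For the second question, one common pattern (instead of Shuttl) is a coldToFrozenScript that tars the frozen bucket and uploads it with boto3, which resolves AWS credentials from the environment or an instance IAM role, so no secret key needs to be hard-coded. A hedged sketch; the S3 bucket name, key prefix, and script behavior are assumptions, not a tested production script:

```python
import os
import sys
import tarfile

S3_BUCKET = "my-splunk-frozen-archive"   # assumed bucket name
S3_PREFIX = "frozen/main/"               # assumed key prefix

def s3_key_for(bucket_dir):
    """S3 object key for a frozen bucket directory (one tarball per bucket)."""
    return S3_PREFIX + os.path.basename(bucket_dir.rstrip("/")) + ".tar.gz"

def archive_to_s3(bucket_dir):
    """Tar the frozen bucket and upload it. boto3 picks up credentials from
    the environment or an IAM role, so no keys live in this script."""
    import boto3  # imported here so the module loads even without boto3
    tar_path = bucket_dir.rstrip("/") + ".tar.gz"
    with tarfile.open(tar_path, "w:gz") as tar:
        tar.add(bucket_dir, arcname=os.path.basename(bucket_dir.rstrip("/")))
    boto3.client("s3").upload_file(tar_path, S3_BUCKET, s3_key_for(bucket_dir))
    os.remove(tar_path)

if __name__ == "__main__" and len(sys.argv) > 1:
    # Splunk invokes coldToFrozenScript with the bucket path as the argument.
    archive_to_s3(sys.argv[1])
```

Such a script would be registered per index via coldToFrozenScript in indexes.conf on each indexer; combined with skipping replicated (rb_) bucket copies, each bucket lands in S3 exactly once.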
