Is it good practice to run an rsync script to take a backup of any new warm buckets created to a new partition?
I need to start backing up my Splunk indexes. My plan is to run an rsync script that copies each newly created warm bucket to a separate partition.
Is this a good practice?
I'm interested in knowing what other users are doing to back up their Splunk indexes on Amazon EC2.
Thanks
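A minimal sketch of such an rsync script, assuming the default standalone layout where each index keeps its buckets under `$SPLUNK_HOME/var/lib/splunk/<index>/db` (both paths here are placeholders; warm buckets are the directories named `db_<newestTime>_<oldestTime>_<id>`, so the `db_*` glob skips hot buckets, which start with `hot_`):

```shell
# Sketch only: copy new warm buckets for every index to a backup partition.
# The source and destination paths are assumptions; adjust for your install.
backup_warm_buckets() {
    splunk_db="$1"    # e.g. /opt/splunk/var/lib/splunk
    backup_dir="$2"   # e.g. /mnt/backup/splunk (the separate partition)
    for index_db in "$splunk_db"/*/db; do
        [ -d "$index_db" ] || continue
        index_name=$(basename "$(dirname "$index_db")")
        mkdir -p "$backup_dir/$index_name"
        # --ignore-existing skips buckets that were already backed up,
        # so only newly rolled warm buckets get copied on each run.
        for bucket in "$index_db"/db_*; do
            [ -d "$bucket" ] || continue
            rsync -a --ignore-existing "$bucket" "$backup_dir/$index_name/"
        done
    done
}
```

You would typically run this from cron; since warm buckets are no longer being written to, copying them does not need Splunk to be stopped.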
I've installed s3sync on the Splunk box to sync the buckets (hot/warm/cold) to S3 storage.
Watch out for your S3 lifecycle policy, in case it automatically removes files after some time.

Hi kkossery, in general I would consider this *not* a good practice, mostly because it does not scale well and is very config- and environment-dependent. I would go with index clustering to meet any availability requirements.
Thanks! I'll wait to see what others have to say on this.
