I've been testing multiple backup workflows and solutions over the last week and have to say I am really impressed with Duplicacy so far. However, there is a specific workflow that I need to be able to accomplish in order to deploy this as a permanent solution.

I have a very specific use case where I need to perpetually back up multiple large shares (50 shares of 2-10 TB each, and growing) from NAS1 to NAS2, using a PC running Duplicacy in between. The reason is that NAS1 has a proprietary file system and a locked-down OS, so there is no way of running Duplicacy on NAS1 or of making NAS2 see it directly.

With that in mind, I also need to be able to do one of the following:

A.) keep all these shares in separate, self-contained backups, or
B.) have a way of extracting a particular share from the backup, as well as shrinking the master backup after the extraction.

The reason is that the retention policy requires anything older than one year to be sent off to AWS Glacier Deep Archive. I can perform that part of the process manually, however I need a way of being able to extract and remove a particular share from the backup when the time comes. What is the best way of going about this?

> Why do you need 50 destinations? Elaborate here.

Because later I need to move the latest backup of the oldest share offsite. NAS1 is where we work from: each project we work on has a share/workspace/mapped drive assigned to it, and the contents of each of those workspaces need to be backed up to NAS2 regularly. When a project is wrapped, two things happen:

- the workspace for that project on NAS1 is deleted to liberate space;
- the up-to-date backup of that workspace is mirrored to AWS GDA.

Finally, in a year's time, the backup on NAS2 is deleted, leaving only one copy sitting on AWS. We have our reasons for doing it this way.

I would love to be able to set up a single backup destination for all workspaces to benefit from maximum deduplication, but then be able to extract a particular project from the backup without having to do a restore, followed by a new backup of that project only, followed by a prune of that project from the master backup, because that would literally take a week for each project. At the moment I'm setting up a new destination folder on NAS2 for each project on NAS1, which is a functional but suboptimal solution, especially as the speed of our project turnover goes up. With Duplicati it's fairly easy, as I can set a different destination every time I set up a new job, but with Duplicacy destinations are set up separately with a unique ID, and having to create 50 of them, then wipe 50 of them to add new ones every year, seems like a very tedious solution… Also, will I then be able to restore from the cloud (recalling to S3 Standard storage first) on another computer?

Duplicacy terminology is confusing (snapshot vs repository vs destination), so I'll speak in the terms the Web UI uses. Perhaps it would be easier to script all of this with the duplicacy CLI, but it's also possible to do in the Web UI.

The way I see it, you would need to create two storages on the Storage tab: one representing the storage on NAS2, and the other on AWS (that one would be transitionable to Glacier via bucket lifecycle rules). On the Backup tab you would create backups per project, into the NAS2 storage, and then set up scheduled backups on the Schedule tab as needed.

Once you are ready to archive a specific project, you would run a copy task to AWS for the corresponding snapshot ID (via a copy task on the Schedule tab; you can use it for manual tasks too), wipe that project from the NAS2 repository*, and then run a prune -exhaustive task on the NAS2 storage (also from the Schedule tab).

*This is something you will need to do manually: go to the NAS2 storage and delete the snapshot corresponding to the project from the snapshots folder. A subsequent prune -exhaustive will remove the unreferenced chunks from the datastore to free space. It would be great if the web UI supported purging a specific snapshot ID from the storage, just as it supports pruning specific revisions; until then the snapshot needs to be deleted manually.
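For reference, the CLI version of that flow might look roughly like the sketch below. The storage names (`default`, `aws`), the project ID, the paths, and the S3 URL are placeholders made up for illustration, not commands from the thread; check them against `duplicacy help` and the Duplicacy storage docs before relying on this.

```bash
#!/usr/bin/env bash
# Sketch only: per-project backup to NAS2, later archival to AWS.
# All names, paths, and URLs below are illustrative placeholders.
set -euo pipefail

PROJECT="project-042"                 # snapshot ID, one per project/workspace
REPO="/mnt/workspaces/$PROJECT"       # where the PC mounts the NAS1 share
NAS2_URL="/mnt/nas2/duplicacy"        # datastore on NAS2 (mounted locally)
AWS_URL="s3://us-east-1@amazon.com/my-bucket/duplicacy"  # verify exact S3 URL format

# One-time setup per project: initialize against NAS2, then add the AWS
# storage as copy-compatible so snapshots can later be copied to it verbatim.
cd "$REPO"
duplicacy init "$PROJECT" "$NAS2_URL"
duplicacy add -copy default aws "$PROJECT" "$AWS_URL"

# Regular backups into the NAS2 storage (schedule via cron or the Web UI).
duplicacy backup -stats

# When the project is wrapped: copy its snapshots to the AWS storage...
duplicacy copy -from default -to aws -id "$PROJECT"

# ...then wipe the project from NAS2 (the manual step: delete its snapshot files)...
rm -rf "$NAS2_URL/snapshots/$PROJECT"

# ...and reclaim space by removing chunks that nothing references any more.
duplicacy prune -storage default -exhaustive -exclusive
```

Worth noting for the deduplication concern above: separate snapshot IDs written into the same storage still deduplicate against each other, so per-project IDs in one NAS2 datastore keep most of the benefit of a single destination.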
> How do I retrieve data from GDA without having to defrost the entire 100TB+ backup?

Duplicacy was not designed to work with cold storage, however you can work around this with a bit of scripting. To restore files, Duplicacy first downloads the snapshot file; the snapshot file refers to the chunks that contain the full snapshot. That file would need to be defrosted first, but since snapshot files are small and kept in a separate folder, they can rather easily be left in hot storage.
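Making the "transitionable to Glacier via bucket lifecycle rules" part concrete, and combining it with the advice above about leaving snapshot files hot, a lifecycle rule could be scoped to the chunks prefix only. This is a sketch under the assumption that the datastore sits under a `duplicacy/` prefix in the bucket; the bucket name, prefix, and 0-day timing are placeholders.

```bash
# Sketch: transition only Duplicacy's chunk objects to Deep Archive, leaving
# the small snapshot files (and the config file) in hot storage so a restore
# can read them without a defrost. Bucket name and prefix are placeholders.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "freeze-duplicacy-chunks",
      "Status": "Enabled",
      "Filter": { "Prefix": "duplicacy/chunks/" },
      "Transitions": [{ "Days": 0, "StorageClass": "DEEP_ARCHIVE" }]
    }]
  }'
```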
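As for the "bit of scripting" on the restore side, here is one blunt sketch, assuming the lifecycle rule above kept the snapshot files in S3 Standard. It thaws every chunk under the prefix with a cheap Bulk retrieval, which only makes sense if the AWS storage holds a single archived project; the smarter version the reply hints at would filter that key list down to the chunks the target revision's snapshot actually references. Bucket, prefix, paths, and the revision number are placeholders.

```bash
# Sketch: defrost an archived project's chunks, then restore with Duplicacy.
set -euo pipefail
BUCKET="my-bucket"
PREFIX="duplicacy/chunks/"

# Ask S3 to thaw every chunk object under the prefix (Bulk tier, kept 7 days).
# restore-object errors on objects already thawed or not archived, hence || true.
aws s3api list-objects-v2 --bucket "$BUCKET" --prefix "$PREFIX" \
    --query 'Contents[].Key' --output text | tr '\t' '\n' |
while read -r key; do
  aws s3api restore-object --bucket "$BUCKET" --key "$key" \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}' || true
done

# Retrievals are asynchronous; once they finish, run a normal restore from the
# AWS storage. On a different computer, initialize a repository against the same
# storage URL with the same snapshot ID first (duplicacy init project-042 <url>).
cd /mnt/restore/project-042
duplicacy restore -storage aws -r 1
```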