fergus
03-17-2010, 04:15 PM
I have a 250G drive dual-booting Windows and Linux, carefully partitioned to allow for swap, FAT32 partitions for two versions of Cygwin, and more (from /dev/sda1 to /dev/sda10). The layout has been polished and refined over time, and I would hate to have to reproduce it from scratch after a fault or drive loss.
The backup procedure is not to back up new/changed files drive by drive, or even to copy drive to drive, but to copy an image of the entire device /dev/sda to a remote drive using dd. Of course, this means the entire 250,059,350,016-byte device is copied to a 250,059,350,016-byte image.
It's worth doing it this way to avoid the whole business of chasing new/changed files, mimicking the same gymnastics from drive to drive, and so on; AND the boot partition is backed up too, should it ever be needed. The downside is that it takes 2h20m, and looking for just one file within the backup is of course impossible, since the backup consists of a single 250G binary file (with lots of zeroes).
Needless to say, all the initial partitioning and the subsequent routine backups are done from Knoppix.
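For concreteness, the whole-device copy described above boils down to a single dd invocation. The paths below are made up, and a small scratch file stands in for /dev/sda so the sketch is harmless to run as-is:

```shell
# SRC/DEST are illustrative; for the real thing, SRC=/dev/sda and DEST a
# file on the mounted remote drive.
SRC=/tmp/demo-src.img
DEST=/tmp/demo-dest.img

# Create a 4 MiB scratch "device" for the demo.
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null

# The whole-device copy; bs=4M cuts syscall overhead versus dd's
# 512-byte default and can shave real time off a 250G run.
dd if="$SRC" of="$DEST" bs=4M 2>/dev/null

# Sanity check: the image is byte-for-byte identical to the source.
cmp "$SRC" "$DEST" && echo "image matches source"
```

Restoring is the same command with if= and of= swapped (from Knoppix, with nothing on the target mounted).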
Question 1. Is there something like partimage (but not partimage itself) that will back up _an_entire_device_ (/dev/sda) rather than individual partitions (/dev/sda[1-10]), but works faster and more efficiently using some kind of built-in compression (as I assume partimage does, reducing, say, a 16G partition to a 5G image)? Could it reduce the 250G image to, I dunno, 80G or so, and all in an hour or less?
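For comparison, the obvious low-tech variant would be piping dd through gzip: not partimage-like, but it does collapse all those zeroes. A sketch with made-up paths, using a zero-filled scratch file in place of the device to show the compression win:

```shell
SRC=/tmp/demo-src2.img
DEST=/tmp/demo-backup2.img.gz

# Demo scratch file: 2 MiB of zeroes, which gzip shrinks to almost nothing.
dd if=/dev/zero of="$SRC" bs=1M count=2 2>/dev/null

# Stream the image through gzip; -1 favours speed over ratio, which
# matters when the bottleneck is a 250G read.
dd if="$SRC" bs=4M 2>/dev/null | gzip -1 > "$DEST"

# Restore is the mirror image (on the real device, NOT the demo):
#   gunzip -c "$DEST" | dd of=/dev/sda bs=4M

# Sanity check: decompressing reproduces the source exactly.
gunzip -c "$DEST" | cmp - "$SRC" && echo "round trip OK"
```

How close this gets to "250G down to 80G in an hour" depends entirely on how compressible the drive's contents are and how fast the CPU keeps up with gzip.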
Question 2. Could dd_rescue be usefully employed here at all? (I have no bad sectors, but maybe dd_rescue skates efficiently over swathes of 00s or something, I dunno?)
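I can't speak for dd_rescue's internals, but for what it's worth, recent GNU dd has a zero-skipping trick of its own: conv=sparse seeks over all-NUL output blocks instead of writing them, so on a sparse-capable filesystem the image allocates far less real disk space than its nominal length. A harmless sketch on a scratch file (the trailing non-zero byte forces a final write, so the file keeps its full length):

```shell
SRC=/tmp/demo-zeros.img
DEST=/tmp/demo-sparse.img

# 8 MiB of zeroes plus one non-zero tail byte.
dd if=/dev/zero of="$SRC" bs=1M count=8 2>/dev/null
printf 'X' >> "$SRC"

# conv=sparse: all-zero 1 MiB blocks become seeks, not writes.
dd if="$SRC" of="$DEST" bs=1M conv=sparse 2>/dev/null

# Logical contents are identical (seeked holes read back as zeroes).
cmp "$SRC" "$DEST" && echo "contents identical"

# On filesystems that support sparse files, allocated space is tiny
# despite the 8 MiB apparent size.
du -k "$DEST"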
Fergus
Fergus