Full system backup

volmis

New Member
Jan 10, 2011
What is the best way to back up the system drive for cluster servers with minimal downtime? I have built a cluster (1 master, 1 node). I do not have hardware RAID, and I tried the unsupported software-RAID install and ran into issues during updates. Really, too much hassle to go that route. So I have a single disk as the system drive. My thoughts are:


  1. migrate VMs from the master to the slave node
  2. reboot the master from a live distro (anything with dd)
  3. Code:
    dd if=/dev/sda of=[backup hard drive]
  4. reboot the master from /dev/sda
  5. migrate the VMs from the slave back to the master
  6. repeat steps 1-5 on the slave node

All of my VM disk images will be stored on a network appliance. I'm using 160 GB drives now. I'm thinking that I should shrink the root partitions down as much as possible (5 GB should be enough, less would be better). If I do that, then I could minimize the amount of time needed to dd everything by running:

Code:
sfdisk -d /dev/sda | sfdisk [backup hard drive]
dd if=/dev/sda1 of=[backup partition 1]
dd if=/dev/sda2 of=[backup partition 2]
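For the actual imaging from the live environment, I'm picturing something roughly like this (the device names and the /mnt/backup mount point are just placeholders; compressing keeps the images small since the partitions are mostly empty):

Code:
# save the partition table so it can be restored with sfdisk later
sfdisk -d /dev/sda > /mnt/backup/sda.partitions

# image each small partition, compressing on the fly
dd if=/dev/sda1 bs=1M | gzip > /mnt/backup/sda1.img.gz
dd if=/dev/sda2 bs=1M | gzip > /mnt/backup/sda2.img.gz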
What are your thoughts on this? Do you foresee any issues going this route? Thanks!
 
Hi,
why do you want to save so much empty and well-known space?
If your system is broken, you can install a new basic system in approx. 15 minutes (assuming you don't have a special disk setup). With a good internet connection you can run the upgrades in less than 10 minutes.
If you then restore your node-specific config (like /etc/pve, /etc/vz... or simply the whole /etc), you are back in business in approx. half an hour. Of course you must have a backup of your VMs, but your suggestion doesn't include any VMs either.

One big advantage of this solution is that you can make the backup during normal operations.
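For example, a simple cron job along these lines would be enough (the /mnt/backup target is just an example path):

Code:
# archive the node-specific config while the node keeps running
tar czf /mnt/backup/$(hostname)-etc-$(date +%Y%m%d).tar.gz /etc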

Udo
 
You are correct, no local VMs.
So just rsyncing /etc would suffice, even for a cluster environment? I like the sound of that.
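Something like this is what I had in mind, pushed to another box on the network (the host name and target path are placeholders):

Code:
# copy /etc to the backup host, keeping permissions and removing stale files
rsync -a --delete /etc/ backuphost:/backups/$(hostname)/etc/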
 
Hi,
/root can also be saved, for the .forward file (email), but that can easily be recreated during install.
If you restore the master without changing the node, you must also get all the ISO images/templates back.
In this case it's easier to make the running node the master and join the recreated node to the cluster.
Once the recreated node is synced (ISOs + templates are copied), you can switch the master back again.
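Pulling the ISOs and templates onto the recreated node could be as simple as this, assuming the default storage paths (the host name is a placeholder):

Code:
# fetch ISO images and container templates from the currently running node
rsync -a runningnode:/var/lib/vz/template/iso/ /var/lib/vz/template/iso/
rsync -a runningnode:/var/lib/vz/template/cache/ /var/lib/vz/template/cache/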

Udo
 
ISOs will be stored on the network, and templates are easily downloaded, so I think these machines will be the easiest ones on my network to back up.
Thanks for the valuable information, Udo!