Migration suggestion - from Proxmox to Proxmox

parker0909

Hi All,

We are planning to migrate a Proxmox virtual machine from a standalone host to another Proxmox installation (a cluster with replication). May I ask for suggestions on how to do the migration with less downtime?
We know that if we use the backup and restore method it may cause a long period of restoring and downtime. Thank you.
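For reference, a rough sketch of the backup and restore approach mentioned above (the VMID, storage and host names are placeholders):

Code:
# on the standalone host: create a snapshot-mode backup of the VM
vzdump 100 --mode snapshot --compress zstd --dumpdir /var/lib/vz/dump

# copy the archive to the target cluster node
scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst root@cluster-node1:/var/lib/vz/dump/

# on the target node: restore the archive to a storage of choice
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 100 --storage local-lvm

Any changes made in the VM after the backup are lost on restore, so in practice the VM has to stay shut down for the whole transfer, which is exactly the downtime we would like to avoid.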

Parker
 
Hi,
if possible you can add the standalone host to the cluster as well and migrate with storage.
If that was successful you can remove the node from the cluster.
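A rough outline of that approach on the command line (the host names and VMID are placeholders; note that a node normally has to be empty of guests when it joins a cluster, so check the clustering documentation for the exact join procedure):

Code:
# on the standalone host: join the existing cluster
pvecm add cluster-node1.example.com

# live-migrate the VM together with its local disks to a cluster node
qm migrate 108 kvm7e --online --with-local-disks

# once everything has been moved off the old node, remove it from the cluster again
pvecm delnode old-standalone-node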
 
Thank you. May I know how to live migrate if I add the standalone host into the cluster?
There is no shared storage such as SAN storage.
 
Perhaps the following collection of commands is useful:

Code:
Convert or copy images:
  Copy RBD Image between Ceph pools (eg rbd_hdd -> rbd_ssd):
    # This command copies the source image to the destination image, honouring parent clone references.
    #   ie: It copies data from the delta.
      rbd deep cp rbd_hdd/vm-108-disk-0 rbd_ssd/vm-108-disk-0;
    # This command copies the content in to a new independent image.
    #   ie: It copies data from both the base source and the delta.
      rbd cp rbd_hdd/vm-108-disk-0 rbd_ssd/vm-108-disk-0;
    # Add something like '--data-pool ec_ssd' to either copy operation to store data in an erasure coded pool.
    # edit the VM config and change the pool reference on the disk line:
    pico /etc/pve/nodes/kvm7e/qemu-server/108.conf
      rbd_hdd -> rbd_ssd
    # once the VM is running from the new image, remove the old one:
    #rbd rm rbd_hdd/vm-108-disk-0

  Copy RBD Image to QCoW2, sparse aware:
    qemu-img convert -f raw -O qcow2 -t unsafe -T unsafe -cWp rbd:rbd_hdd/vm-119-disk-0 /var/lib/vz/template/iso/vm-119-disk-0.qcow2;
      'c' compresses output file, relatively quick and size isn't much off so I wouldn't separately gzip the raw file.
      Herewith an example of a newly deployed system:
        rbd du rbd_hdd/vm-119-disk-0    4.9 GiB
        QCoW2                           5.7 GiB
        QCoW2 compressed                2.3 GiB
        GZip compressed QCoW2           2.1 GiB    # not multi-threaded, so can take a significant amount of time

  Copy QCoW2 to Ceph RBD Image:
    qemu-img convert -f qcow2 -O raw -t unsafe -T unsafe -nWp /var/lib/vz/template/iso/vm-119-disk-0.qcow2.compressed rbd:rbd_hdd/vm-119-disk-0;

  Copy RBD Image, uses thin provisioning and skips zeros:
    qemu-img convert -f raw -O raw -t unsafe -T unsafe -nWp rbd:rbd_hdd/vm-213-disk-1 rbd:rbd_ssd/vm-213-disk-1_new;

  Copy QCoW2 image to new RBD image:
    qemu-img convert -f qcow2 -O raw -t unsafe -T unsafe -nWp source.qcow2 rbd:rbd_hdd/vm-999-disk-0

  Copy RBD Image to VHD (Microsoft Virtual PC), creates a dynamic VHD which skips zeros:
    qemu-img convert -f raw -O vpc -t unsafe -T unsafe -o subformat=dynamic -p -S 512 rbd:rbd_ssd/vm-239-disk-1 images/labournet-cms1-old.vhd;

  Copy RBD partition to another RBD partition (sector alignment correction):
    qemu-img convert -f raw -O raw -t unsafe -T unsafe -nWp /dev/rbd16p1 /dev/rbd17p1

  Copy VMDK to RBD, uses thin provisioning and skips zeros:
    rbd create rbd_hdd/onos-tutorial --size 200G;
    qemu-img convert -f vmdk -O raw -T unsafe -nWp ./onos-tutorial-1.15.0-disk001.vmdk rbd:rbd_hdd/onos-tutorial;


You can also use the following Perl monster to read 4 MiB chunks of a block device and write the differences to another where the checksums of the blocks differ. This is great if you can create a snapshot, transfer the data and then only shut down the VM for a subsequent copy, where you only need to transfer the differences:

Code:
Syncing block devices (source to target):
  NB: Advanced use, *WILL* destroy data if you are not careful

  target:
    Map the device to obtain a block device reference name (/dev/rbd0 in this example)
    rbd map rbd_hdd/vm-100-disk-0
  source:
    export dev1='/dev/lvm0/vm-105-disk-0';
    export dev2='/dev/rbd0';            # output of the 'rbd map' on the target
    export remote='root@kvm1a.company.co.za';

    ssh -o StrictHostKeyChecking=no $remote "
      perl -'MDigest::MD5 md5' -ne 'BEGIN{\$/=\4194304};print md5(\$_)' $dev2" |
      perl -'MDigest::MD5 md5' -ne 'BEGIN{$/=\4194304};$b=md5($_);
        read STDIN,$a,16;if ($a eq $b) {print "s"} else {print "c" . $_}' $dev1 |
      ssh -o StrictHostKeyChecking=no $remote "
       perl -ne 'BEGIN{\$/=\1} if (\$_ eq\"s\") {\$s++} else {if (\$s) {
        seek STDOUT,\$s*4194304,1; \$s=0}; read ARGV,\$buf,4194304; print \$buf}' 1<> $dev2"

NB: You should set up SSH key-based logins so that the script can log in via the PuTTY agent or another SSH key-based authentication agent.
 
Herewith some notes on creating an SSH key pair and then syncing block devices by transferring only the differences, compressing them via lzop first. Great for inter-data-centre copies...


Code:
Network based block replication:
  NB: Requires 'lzop' package!
  PS: Requires SSH keys to be generated on the local system:
        cd /root/.ssh
        ssh-keygen -t rsa -C 'RSync Transfer - Autologin' -f rsync_rsa

       Add public key portion to /root/.ssh/authorized_keys on the remote host.
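       # e.g. using ssh-copy-id (the remote host name here is simply the one used further below):
       #   ssh-copy-id -i /root/.ssh/rsync_rsa.pub root@kvm7e.company.co.za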

[----------------------- /etc/cron.daily/zzzzzz-network-kvm-backup -----------------------]
lvcreate -L 10G /dev/vg_kvm/adserver -s -n adserver-snap1 > /dev/null;
export dev1='/dev/vg_kvm/adserver-snap1';
export dev2='/dev/rbd0';
export remote='root@kvm7e.company.co.za';

ssh -i /root/.ssh/rsync_rsa -o StrictHostKeyChecking=no $remote "
  perl -'MDigest::MD5 md5' -ne 'BEGIN{\$/=\4194304};print md5(\$_)' $dev2 | lzop -c" |
  lzop -dc | perl -'MDigest::MD5 md5' -ne 'BEGIN{$/=\4194304};$b=md5($_);
    read STDIN,$a,16;if ($a eq $b) {print "s"} else {print "c" . $_}' $dev1 | lzop -c |
  ssh -i /root/.ssh/rsync_rsa -o StrictHostKeyChecking=no $remote "lzop -dc |
   perl -ne 'BEGIN{\$/=\1} if (\$_ eq\"s\") {\$s++} else {if (\$s) {
    seek STDOUT,\$s*4194304,1; \$s=0}; read ARGV,\$buf,4194304; print \$buf}' 1<> $dev2"
lvremove -f /dev/vg_kvm/adserver-snap1 > /dev/null;
[----------------------- /etc/cron.daily/zzzzzz-network-kvm-backup -----------------------]

The above creates an LVM2 snapshot with 10 GiB of space available to store deltas whilst the snapshot is in use. Please note that Proxmox by default uses LVM thin provisioning, so the lvcreate command will need to be slightly different. The above example was used to create a snapshot on non-thin LVM2 storage, transfer the differences and then release the temporary snapshot. If the snapshot reaches 100% usage then the source snapshot is no longer consistent!
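For completeness, a minimal sketch of the same snapshot step on LVM-thin storage (the volume group and LV names are placeholders; a thin snapshot allocates from the thin pool, so no -L size is given):

Code:
# create a thin snapshot of the thin LV (no size needed, it allocates from the pool)
lvcreate -s -n vm-105-disk-0-snap1 pve/vm-105-disk-0

# thin snapshots are flagged to skip activation by default, so activate explicitly before reading
lvchange -ay -K pve/vm-105-disk-0-snap1

# ... transfer the data as above, then drop the snapshot again
lvremove -f pve/vm-105-disk-0-snap1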
 
Thank you. May I know how to live migrate if I add the standalone host into the cluster?
There is no shared storage such as SAN storage.
You can migrate with storage between cluster nodes that have no shared storage.
[Screenshot of the VM migration dialog]
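The equivalent on the command line, assuming the VM has only local disks (the VMID, node and target storage names are placeholders):

Code:
# online migration with local disks, mapping them onto a storage that exists on the target node
qm migrate 108 kvm7e --online --with-local-disks --targetstorage rbd_ssd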
 
