LXC slow migration workaround

yarii

Renowned Member
Mar 24, 2014
In this example I have two remote hosts, VPS1 and VPS2. In the past, when the container was <500G, I used rsync and the backup/restore method.
Now that the container is almost 2TB, I am not able to migrate it with reasonable downtime.

So as a workaround I did:

# example 1: initial replication of a single pool/subvol/zvol over SSH to the remote host
Code:
zfs snapshot zfs1-vps1/subvol-103-disk-0@migracja
zfs send zfs1-vps1/subvol-103-disk-0@migracja | ssh root@10.100.0.4 zfs recv zfs1-vps2/subvol-103-disk-0@migracja

# stop the LXC container
Code:
pct stop 103

# send incremental snapshot
Code:
zfs snapshot zfs1-vps1/subvol-103-disk-0@migracja2
zfs send -R -I zfs1-vps1/subvol-103-disk-0@migracja zfs1-vps1/subvol-103-disk-0@migracja2 | ssh root@10.100.0.4 zfs recv zfs1-vps2/subvol-103-disk-0

# on the remote machine, edit the container rootfs config
Code:
vim /etc/pve/lxc/103.conf
:%s|zfs1-vps1/subvol-103-disk-0|zfs1-vps2/subvol-103-disk-0|g
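The same substitution can also be done non-interactively with sed. A sketch, not the exact author's step: the real file on a Proxmox host would be /etc/pve/lxc/103.conf, and the demo below writes a sample line to a temp copy first so it is self-contained — check that your actual rootfs line matches the pattern before running it for real.

```shell
# Non-interactive alternative to the vim substitution above.
# The config path and the rootfs line below are assumptions for the demo;
# on the real remote host, point sed at /etc/pve/lxc/103.conf instead.
CONF=/tmp/103.conf
printf 'rootfs: zfs1-vps1/subvol-103-disk-0,size=2T\n' > "$CONF"   # demo line only
sed -i 's|zfs1-vps1/subvol-103-disk-0|zfs1-vps2/subvol-103-disk-0|g' "$CONF"
cat "$CONF"
```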

# start LXC container on remote machine
Code:
pct start 103

# < 5 minutes downtime ! WHOA!

# the last thing is to check that everything migrated OK, then destroy the old zfs subvol and its snapshots
Code:
zfs destroy zfs1-vps1/subvol-103-disk-0@migracja
zfs destroy zfs1-vps1/subvol-103-disk-0@migracja2
zfs destroy zfs1-vps1/subvol-103-disk-0
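The whole sequence can be wrapped in a small script. This is only a sketch using the example pool names, VMID and remote address from above, and it runs in dry-run mode: `run` just prints each command (and logs it) so the plan can be reviewed first — replace `echo` with `eval` to actually execute.

```shell
# Dry-run sketch of the migration steps above; prints commands instead of
# running them. All names are the example values from this post.
SRC_POOL=zfs1-vps1
DST_POOL=zfs1-vps2
VMID=103
REMOTE=root@10.100.0.4

run() { echo "$@" | tee -a /tmp/migrate-$VMID.log; }   # swap echo for eval to really run

run "zfs snapshot $SRC_POOL/subvol-$VMID-disk-0@migracja"
run "zfs send $SRC_POOL/subvol-$VMID-disk-0@migracja | ssh $REMOTE zfs recv $DST_POOL/subvol-$VMID-disk-0@migracja"
run "pct stop $VMID"
run "zfs snapshot $SRC_POOL/subvol-$VMID-disk-0@migracja2"
run "zfs send -I @migracja $SRC_POOL/subvol-$VMID-disk-0@migracja2 | ssh $REMOTE zfs recv $DST_POOL/subvol-$VMID-disk-0"
run "ssh $REMOTE pct start $VMID"
```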

Is there some way to achieve this method using the GUI?
The Volume Action --> Move Storage button doesn't support that "must have" function.

Copying an LXC with 20 million files via rsync takes a few days, far longer than copying at the block layer, which also needs almost no downtime.
It's also impossible to keep the copy consistent when the data changes a lot.
 
It would be interesting how fast lvmsync would do the same job; it works the same way, just with different commands. :)
 
This was the second main reason I jumped from LVM to the ZFS stack.
So for ext4 we should use "lvmsync" and for ZFS "zfs send/recv".
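For comparison, the lvmsync version of the same trick would look roughly like this. This is a sketch from memory of the lvmsync README, not a tested procedure — verify the syntax before use, and the VG/LV names here are made up:

```shell
# rough lvmsync equivalent of the zfs steps above (VG/LV names are examples)
lvcreate --snapshot -L10G -n lv103-snap /dev/vg0/lv103          # like 'zfs snapshot'
dd if=/dev/vg0/lv103-snap bs=1M | ssh root@10.100.0.4 dd of=/dev/vg0/lv103   # initial full copy, container still running
pct stop 103
lvmsync /dev/vg0/lv103-snap root@10.100.0.4:/dev/vg0/lv103     # send only blocks changed since the snapshot
```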