Migrate LXC VPS using ZFS Replication

yena

Hello, I'm trying this procedure to migrate one or more LXC VPSs between two physical servers.

Snap creation:
zfs snapshot rpool/data/subvol-100-disk-1@SnapDay_18_10_2016_10:18:44
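For reference, a minimal sketch of how that timestamped snapshot name could be generated from a script (the dataset and the SnapDay_ naming convention are taken from the command above, the rest is my assumption):

# assumption: container 100 on the source node, name format day_month_year_H:M:S
DATASET="rpool/data/subvol-100-disk-1"
SNAPNAME="SnapDay_$(date +%d_%m_%Y_%H:%M:%S)"
zfs snapshot "${DATASET}@${SNAPNAME}"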

Send Snap:
zfs send -i rpool/data/subvol-100-disk-1@SnapDay_18_10_2016_10:17:20 rpool/data/subvol-100-disk-1@SnapDay_18_10_2016_10:18:44 | ssh -p 2222 192.168.58.2 zfs recv -F -d rpool/backup_rcv
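Note that the incremental send with -i only works if the older snapshot (@SnapDay_18_10_2016_10:17:20 here) has already been received on the destination. The very first transfer therefore has to be a full send, for example something like this (same port and target pool as above, assumed):

# initial full send of the oldest snapshot; later runs can then be incremental with -i
zfs send rpool/data/subvol-100-disk-1@SnapDay_18_10_2016_10:12:50 | ssh -p 2222 192.168.58.2 zfs recv -F -d rpool/backup_rcv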

Filesystem on the destination server after receiving the snapshots:

NAME                                                                  USED  AVAIL  REFER  MOUNTPOINT
rpool                                                                9.92G  1.75T    96K  /rpool
rpool/ROOT                                                            806M  1.75T    96K  /rpool/ROOT
rpool/ROOT/pve-1                                                      806M  1.75T   806M  /
rpool/backup_rcv                                                      403M  1.75T    96K  /rpool/backup_rcv
rpool/backup_rcv/data                                                 403M  1.75T    96K  /rpool/backup_rcv/data
rpool/backup_rcv/data/subvol-100-disk-1                               402M  1.75T   387M  /rpool/backup_rcv/data/subvol-100-disk-1
rpool/backup_rcv/data/subvol-100-disk-1@SnapDay_18_10_2016_10:12:50  15.1M      -   343M  -
rpool/backup_rcv/data/subvol-100-disk-1@SnapDay_18_10_2016_10:17:20     8K      -   387M  -
rpool/backup_rcv/data/subvol-100-disk-1@SnapDay_18_10_2016_10:18:44      0      -   387M  -
rpool/data                                                            241M  1.75T    96K  /rpool/data


Now I can "restore" my VPS by cloning the last snapshot on the destination server:
zfs clone rpool/backup_rcv/data/subvol-100-disk-1@SnapDay_18_10_2016_10:18:44 rpool/data/subvol-200-disk-1
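Before wiring the clone into Proxmox it can be inspected directly, because ZFS mounts it right away under its mountpoint (a quick sanity check on my side, not part of the procedure itself):

zfs get origin,mountpoint rpool/data/subvol-200-disk-1
ls /rpool/data/subvol-200-disk-1   # should show the container root filesystem (bin, etc, var, ...)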



So I have:
rpool/data/subvol-200-disk-1  241M  1.75T  454M  /rpool/data/subvol-200-disk-1
(the clone shares its unmodified blocks with the received snapshot, so it does not take up the full 454M again).

And now I promote the clone:
zfs promote rpool/data/subvol-200-disk-1

root@PrxIaki2Saes:/etc/zfs_replica# zfs list -t all
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
rpool                                                      9.92G  1.75T    96K  /rpool
rpool/ROOT                                                  806M  1.75T    96K  /rpool/ROOT
rpool/ROOT/pve-1                                            806M  1.75T   806M  /
rpool/backup_rcv                                            192K  1.75T    96K  /rpool/backup_rcv
rpool/backup_rcv/data                                        96K  1.75T    96K  /rpool/backup_rcv/data
rpool/backup_rcv/data/subvol-100-disk-1                        0  1.75T   387M  /rpool/backup_rcv/data/subvol-100-disk-1
rpool/data                                                  644M  1.75T    96K  /rpool/data
rpool/data/subvol-200-disk-1                                644M  1.75T   454M  /rpool/data/subvol-200-disk-1
rpool/data/subvol-200-disk-1@SnapDay_18_10_2016_10:12:50   15.1M      -   343M  -
rpool/data/subvol-200-disk-1@SnapDay_18_10_2016_10:17:20      8K      -   387M  -
rpool/data/subvol-200-disk-1@SnapDay_18_10_2016_10:18:44      8K      -   387M  -
rpool/swap                                                 8.50G  1.75T    64K  -
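After the promote the dependency is reversed: the snapshots now belong to rpool/data/subvol-200-disk-1 (as the listing shows) and the received dataset rpool/backup_rcv/data/subvol-100-disk-1 has become a clone of it, which is why its USED dropped to 0. A quick way to confirm this before cleaning up (my addition, not part of the original steps):

zfs get origin rpool/backup_rcv/data/subvol-100-disk-1
# expected: rpool/data/subvol-200-disk-1@SnapDay_18_10_2016_10:18:44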



and I reference it in /etc/pve/nodes/PrxIaki2Saes/lxc/200.conf:
-----------------------------------------------------------------------------------------------------------------------------------
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: test.local
memory: 1024
net0: name=eth0,bridge=vmbr0,gw=185.36.72.1,hwaddr=46:F2:BD:2E:28:F0,ip=185.36.72.245/24,type=veth
ostype: debian
rootfs: local-zfs:subvol-200-disk-1,size=8G
swap: 512

-----------------------------------------------------------------------------------------------------------------------------------
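With the config in place the container should show up as CT 200 on the new node. A couple of quick checks with the standard pct tool (only start it once the old container on the source node is stopped, otherwise the IP/MAC from the config above will conflict):

pct config 200
pct start 200
pct status 200   # expect: status: running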

Finally I can destroy the old snapshot together with its replica (after the promote, the received dataset is only a clone of this snapshot, so -R removes it as well):
zfs destroy -R rpool/data/subvol-200-disk-1@SnapDay_18_10_2016_10:18:44
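Because -R also destroys dependent clones, it can be worth previewing what will actually be removed first; zfs destroy supports a dry run for that:

zfs destroy -nvR rpool/data/subvol-200-disk-1@SnapDay_18_10_2016_10:18:44
# -n/-v only print what would be destroyed, nothing is deleted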

I have done a test and it works.
Is there anything wrong with using this in production? :)

It is very useful, because if I "restore" a big snapshot by sending it back into a new volume on the same pool, I use double the space on the storage and it is very slow, since everything has to be copied again from the receive volume.

Thanks
 

I have tested pve-zsync, but I need a custom SSH port for the replication, so I wrote my own little script (a possible workaround for the port is sketched below).
In any case, whether I use pve-zsync or manual ZFS replication, is it correct to clone and promote the dataset in order to recover the VPS?
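For the custom-port issue, one possible workaround (my assumption, not something from the pve-zsync docs) is to pin the port per host in root's SSH client config, since the replication goes over ssh anyway:

# /root/.ssh/config on the sending node
Host 192.168.58.2
    Port 2222
    User root

Whether pve-zsync picks this up depends on how it calls ssh, so treat it as something to test rather than a guarantee; for a hand-written script it does remove the need for -p 2222.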
The zfs send procedure described in the pve-zsync documentation is too slow and uses twice the space...
Clone and promote is faster.
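For reference, a minimal sketch of what such a replication script could look like (dataset, host, port and the assumption that the previous snapshot already exists on the destination are all taken from the examples above; error handling is left out):

#!/bin/bash
set -e

SRC="rpool/data/subvol-100-disk-1"     # assumption: container 100 on the source node
DEST_HOST="192.168.58.2"
DEST_PORT="2222"
DEST_POOL="rpool/backup_rcv"           # received as rpool/backup_rcv/data/subvol-100-disk-1

NEW_SNAP="SnapDay_$(date +%d_%m_%Y_%H:%M:%S)"

# newest existing local snapshot (assumed to be present on the destination as well)
PREV_SNAP=$(zfs list -H -d 1 -t snapshot -o name -s creation "$SRC" | tail -n 1)

zfs snapshot "${SRC}@${NEW_SNAP}"

if [ -n "$PREV_SNAP" ]; then
    # incremental send based on the previous snapshot
    zfs send -i "$PREV_SNAP" "${SRC}@${NEW_SNAP}" | ssh -p "$DEST_PORT" "$DEST_HOST" zfs recv -F -d "$DEST_POOL"
else
    # first run: full send
    zfs send "${SRC}@${NEW_SNAP}" | ssh -p "$DEST_PORT" "$DEST_HOST" zfs recv -F -d "$DEST_POOL"
fi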
