[SOLVED] How to move linked clone with base to another node?

ivensiya

I use ZFS disks and linked clones. When I back up a linked VM (disk size in zfs list: 1 GB), vzdump backs it up together with the base disk (disk size in zfs list: 10 GB). When I then restore the linked VM on another node, it is restored without the base disk and its size is 11 GB. The other node already has the same base template, but ... Can I move my linked VMs to other nodes while keeping them linked to their base templates?

Solved

Code:
# copy the base template volume (including its @__base__ snapshot) to the target node
node-from# zfs send -R rpool/base-555-disk-1@__base__ | ssh "node-to" zfs recv rpool/base-555-disk-1
# snapshot the linked clone, then send only the diff on top of the base snapshot;
# on the target this recreates vm-800-disk-1 as a clone of the base volume
node-from# zfs snapshot rpool/vm-800-disk-1@__send__
node-from# zfs send -Rv -i rpool/base-555-disk-1@__base__ rpool/vm-800-disk-1@__send__ | ssh "node-to" zfs recv rpool/vm-800-disk-1
 
Not via vzdump - vzdump dumps the disk content and does not care about the underlying storage technology or metadata. You can use ZFS with send/receive, or a distributed/shared storage with linked clone support, to achieve what you want (if I understood you correctly).
 
ZFS send/receive does the same thing as vzdump and copies all the data, linked plus base. Last night I tested it with a test linked clone: I made a new VM whose disk (ZFS volume) was 8 KB. When I sent and received it to the other node, ZFS created a 10 GB disk, the same size as the base VM.
 
What I mean is that with ZFS, you can do the following:
  • create the template on node1
  • zfs send the template volume to node2
  • create a linked clone on node1
  • zfs send the incremental diff between template and clone from node1 to node2
Of course you need to have the base data on both nodes... (a command-level sketch follows below)
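
Spelled out as commands, the steps above might look roughly like this (a sketch only; the pool name rpool and the base-555-disk-1 / vm-800-disk-1 volume names are taken from the original poster's setup, and node2 stands for the target node):

Code:
# steps 1-2: on node1, send the template volume up to its base snapshot to node2
node1# zfs send -R rpool/base-555-disk-1@__base__ | ssh node2 zfs recv rpool/base-555-disk-1
# steps 3-4: snapshot the linked clone and send only the diff relative to the base snapshot
node1# zfs snapshot rpool/vm-800-disk-1@__send__
node1# zfs send -i rpool/base-555-disk-1@__base__ rpool/vm-800-disk-1@__send__ | ssh node2 zfs recv rpool/vm-800-disk-1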
 
OK. I sent the base template. Now I am trying to send the linked clone:

Code:
zfs send -i rpool/vm-800-disk-1 rpool/vm-800-disk-1  | ssh dc02 zfs recv rpool/vm-800-disk-1
cannot receive: failed to read from stream
What is the correct command to send it?
 
Code:
zfs send -i rpool/vm-800-disk-1@test rpool/vm-800-disk-1 | ssh dc02 zfs recv rpool/vm-800-disk-1
cannot receive incremental stream: destination 'rpool/vm-800-disk-1' does not exist
dc02# zfs create -s -V 32G rpool/vm-800-disk-1
Code:
cannot receive incremental stream: most recent snapshot of rpool/vm-800-disk-1 does not
match incremental source
 
Code:
$ zfs create -V 1G fastzfs/testsource/basevolume
$ mkfs.ext4 /dev/zvol/fastzfs/testsource/basevolume
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: eeac548a-98f1-474e-80dd-1e26b3b77538
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

$ mkdir testvol
$ mount /dev/zvol/fastzfs/testsource/basevolume testvol
$ dd if=/dev/urandom of=testvol/testfile bs=1M count=16
16+0 records in
16+0 records out
16777216 bytes (17 MB) copied, 1.10679 s, 15.2 MB/s
$ umount testvol
$ zfs snapshot fastzfs/testsource/basevolume@__base__
$ zfs create fastzfs/testtarget
$ zfs send -R fastzfs/testsource/basevolume@__base__ | zfs receive fastzfs/testtarget/basevolume
$ zfs list -r -t all fastzfs/testsource fastzfs/testtarget
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
fastzfs/testsource                      1.10G   114G    96K  /fastzfs/testsource
fastzfs/testsource/basevolume           1.10G   115G  65.2M  -
fastzfs/testsource/basevolume@__base__      0      -  65.2M  -
fastzfs/testtarget                      1.10G   114G    96K  /fastzfs/testtarget
fastzfs/testtarget/basevolume           1.10G   115G  65.2M  -
fastzfs/testtarget/basevolume@__base__      0      -  65.2M  -
$ zfs clone fastzfs/testsource/basevolume@__base__ fastzfs/testsource/linkedclone
$ mount /dev/zvol/fastzfs/testsource/linkedclone testvol
$ dd if=/dev/urandom of=testvol/testfile bs=1M count=16
16+0 records in
16+0 records out
16777216 bytes (17 MB) copied, 1.05181 s, 16.0 MB/s
$ umount testvol
$ zfs snapshot fastzfs/testsource/linkedclone@__send__
$ zfs send -Rv -i fastzfs/testsource/basevolume@__base__ fastzfs/testsource/linkedclone@__send__ | zfs receive fastzfs/testtarget/linkedclone
send from @ to fastzfs/testsource/linkedclone@__send__ estimated size is 16.3M
total estimated size is 16.3M
TIME        SENT   SNAPSHOT
$ zfs list -r -t all fastzfs/testsource fastzfs/testtarget
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
fastzfs/testsource                       1.11G   114G    96K  /fastzfs/testsource
fastzfs/testsource/basevolume            1.10G   115G  65.2M  -
fastzfs/testsource/basevolume@__base__       0      -  65.2M  -
fastzfs/testsource/linkedclone           16.4M   114G  81.4M  -
fastzfs/testsource/linkedclone@__send__      0      -  81.4M  -
fastzfs/testtarget                       1.11G   114G    96K  /fastzfs/testtarget
fastzfs/testtarget/basevolume            1.10G   115G  65.2M  -
fastzfs/testtarget/basevolume@__base__       0      -  65.2M  -
fastzfs/testtarget/linkedclone           16.4M   114G  81.4M  -
fastzfs/testtarget/linkedclone@__send__      0      -  81.4M  -
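
The 16.4M in the USED column shows that the received linkedclone on the target is still a space-saving clone of the base snapshot, not a full copy. To check the relationship explicitly, zfs get origin should report the base snapshot as the clone's parent (a small sketch using the dataset names from the example above):

Code:
$ zfs get -H -o value origin fastzfs/testtarget/linkedclone
fastzfs/testtarget/basevolume@__base__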
 
@fabian Thank you so much!

Solved it on my proxmox cluster:

Code:
node-from# zfs send -R rpool/base-555-disk-1@__base__ | ssh "node-to" zfs recv rpool/base-555-disk-1
node-from# zfs snapshot rpool/vm-800-disk-1@__send__
node-from# zfs send -Rv -i rpool/base-555-disk-1@__base__ rpool/vm-800-disk-1@__send__ | ssh "node-to"  zfs recv rpool/vm-800-disk-1
 
I found it much easier to set up replication jobs for the base & linked clones, then just move the lxc/qemu conf file from one node to the other. It goes without saying that this has to be an offline migration, and it has to be done from the shell instead of the web GUI.

For example:
Bash:
mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/qemu-server/
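
For the replication-job part, a rough sketch from the shell could look like the following (my assumptions: the jobs are created with the pvesr CLI rather than the GUI, the VMIDs are 555 for the template and 800 for the linked clone as earlier in this thread, and node2 is the target node):

Bash:
# create replication jobs to node2 for the template VM and the linked-clone VM
pvesr create-local-job 555-0 node2 --schedule "*/15"
pvesr create-local-job 800-0 node2 --schedule "*/15"
# check that both jobs have completed at least one run before moving the config
pvesr status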
 
Thanks so much for this suggestion, it works brilliantly. Offline migration via replication, base first and then the linked clones, followed by the manual mv in the CLI, did it for me.
 