Migrating VM using LVM to ZFS

okieunix1957

Member
Feb 11, 2020
I have an issue where the host I am migrating to has ZFS on it instead of EXT4, and the VM was built using LVM.

2020-03-09 18:50:19 starting migration of VM 132 to node 'host10' (xxx.xxx.xxx.21)
2020-03-09 18:50:19 found local disk 'local-lvm:vm-132-disk-1' (via storage)
2020-03-09 18:50:19 found local disk 'local-lvm:vm-132-disk-2' (via storage)
2020-03-09 18:50:19 copying disk images
Debian GNU/Linux 9
Volume group "pve" not found
Cannot process volume group pve <<<< This is the problem
command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,lv_name,lv_size,lv_attr,pool_lv,data_percent,metadata_percent,snap_percent,uuid,tags,metadata_size pve' failed: exit code 5
command 'dd 'if=/dev/pve/vm-132-disk-2' 'bs=64k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2020-03-09 18:50:20 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export local-lvm:vm-132-disk-2 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=sc4-devops-qa-prxmx-host10' root@172.16.16.21 -- pvesm import local-lvm:vm-132-disk-2 raw+size - -with-snapshots 0' failed: exit code 5
2020-03-09 18:50:20 aborting phase 1 - cleanup resources
2020-03-09 18:50:20 ERROR: found stale volume copy 'local-lvm:vm-132-disk-2' on node 'host10'
2020-03-09 18:50:20 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export local-lvm:vm-132-disk-2 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=sc4-devops-qa-prxmx-host10' root@XXX.XXX.XXX.21 -- pvesm import local-lvm:vm-132-disk-2 raw+size - -with-snapshots 0' failed: exit code 5
TASK ERROR: migration aborted

The destination host is ZFS, so how can I do this without having to rebuild the node again?

Phillip
 
Migration needs the same storage type, so what you are trying is not possible. Try backup/restore instead.
 
You could also try to create a ZFS dataset of an appropriate size and then just "dd" the LVM content into the new ZFS dataset.
It would also need the VM to be offline, but it might be faster than a backup/restore.
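A rough sketch of that route, assuming the target pool is 'rpool' and reusing the disk name from the log above (the 32G size and the host name are just examples):

# on the ZFS node: create a zvol the same size as the source LV
zfs create -V 32G rpool/data/vm-132-disk-1
# on the LVM node, with VM 132 shut down: stream the LV into the zvol over ssh
dd if=/dev/pve/vm-132-disk-1 bs=64k status=progress | ssh root@host10 'dd of=/dev/zvol/rpool/data/vm-132-disk-1 bs=64k'
# finally, edit the disk entry in the VM config so it points at the new ZFS storage

Dataset layout and storage names depend on how the ZFS storage is defined on the target, so check that before running anything.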
 
Have you tried a live migration with the VM running? With a current PVE version you should be able to select a target storage. In that situation it should be possible to migrate to a target of a different type.
 
Migration needs the same storage type, so what you are trying is not possible. Try backup/restore instead.

Thanks, but that means downtime, which we cannot afford at this time.
I was thinking: why not create a 'pve' LVM volume group on the new host?
You could also try to create a ZFS dataset of an appropriate size and then just "dd" the LVM content into the new ZFS dataset.
It would also need the VM to be offline, but it might be faster than a backup/restore.

That idea is good, but not at this time; we don't want to have to keep doing that.
I am going to rebuild the node using ext4/LVM this time. That way there's no problem.
 
Have you tried a live migration with the VM running? With a current PVE version you should be able to select a target storage. In that situation it should be possible to migrate to a target of a different type.

Do you mind telling me exactly how that can be done? Thanks...
 
If you are running PVE 6 and migrate a running VM, you should see a drop-down field in the dialog for the target storage.
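The CLI equivalent would be something along these lines (the VM ID, node name and storage name here are examples, not taken from this thread):

# live-migrate VM 132 to host10 and move its local disks onto the ZFS-backed storage
qm migrate 132 host10 --online --with-local-disks --targetstorage local-zfs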
 
When I try this I get:

zfs error: cannot open 'rpool': no such pool

2020-09-27 16:15:47 ERROR: Failed to sync data - could not activate storage 'ZFS', zfs error: cannot open 'rpool': no such pool
2020-09-27 16:15:47 aborting phase 1 - cleanup resources
2020-09-27 16:15:47 ERROR: migration aborted (duration 00:00:01): Failed to sync data - could not activate storage 'ZFS', zfs error: cannot open 'rpool': no such pool
TASK ERROR: migration aborted
 
Scratch that, I just found the problem; it works perfectly :)
Would you mind sharing what the problem was? It might help other people in a similar situation. :)
 
Yes, no problem. In my case it was that I had just added the nodes to the cluster, which meant the storages were showing up on every server. What I had to do was simply go into each storage and edit it to restrict the storage to its own server.
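For reference, the same restriction can also be set from the shell, roughly like this (the storage ID and node name are examples):

# tell PVE that the 'local-zfs' storage definition only applies to host10
pvesm set local-zfs --nodes host10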

storage.png

I hope this helps, I assumed everyone else knew this and it was just me :)
 
Hi @aaron and @Digitaldaz I was having the exact same problem and this solution helped me.

Just out of curiosity, is it a technical limitation or a bug that only live migration can change storage? I would have thought it would be easier to do things when the VM is turned off.

Also, as containers can't be live migrated, this solution does not work for them. I tried taking a backup on the PVE node with LVM, but I am not able to move the backup to another PVE node which has ZFS, unless of course some sort of shared storage is used. Even cloning only clones to a shared storage. Is there a workaround for containers?
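For context, the manual route I have in mind would look roughly like this (the container ID, paths, archive name and storage name are placeholders):

# on the LVM node: back up container 105 while it is stopped
vzdump 105 --mode stop --dumpdir /var/tmp
# copy the archive to the ZFS node
scp /var/tmp/vzdump-lxc-105-*.tar.lzo root@host10:/var/tmp/
# on the ZFS node: restore the archive onto the ZFS-backed storage
pct restore 105 /var/tmp/vzdump-lxc-105-<timestamp>.tar.lzo --storage local-zfs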