Unable to migrate to ZFS

Joey

Active Member
Nov 12, 2017
Hi All,

I've moved virtual disks between storage types a number of times without problems. However, all of a sudden I'm unable to move disks to ZFS:

Code:
create full clone of drive virtio0 (local-vz:112/vm-112-disk-1.raw)
transferred: 0 bytes remaining: 85899345920 bytes total: 85899345920 bytes progression: 0.00 %
qemu-img: Could not open 'zeroinit:/dev/zvol/rpool/data/vm-112-disk-1': Could not open '/dev/zvol/rpool/data/vm-112-disk-1': No such file or directory
TASK ERROR: storage migration failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f raw -O raw /rpool/vz/images/112/vm-112-disk-1.raw zeroinit:/dev/zvol/rpool/data/vm-112-disk-1' failed: exit code 1

Moving from ZFS to dir is no problem.

Of course '/dev/zvol/rpool/data/vm-112-disk-1' doesn't exist yet; it should be created as part of the migration. Adding a new disk to a VM on ZFS is also no problem.
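The race described above can be made visible without Proxmox: `zfs create -V` makes the zvol, but the `/dev/zvol/...` node only appears once udev has serviced the add event. A minimal polling sketch (the path and timeout are illustrative examples, not what PVE actually runs internally):

```shell
#!/bin/sh
# Illustrative only: wait for a device node that udev creates
# asynchronously after `zfs create -V`. Path and timeout are examples.
wait_for_dev() {
    path=$1
    timeout=${2:-10}                 # seconds to wait before giving up
    i=0
    while [ "$i" -lt "$timeout" ]; do
        [ -e "$path" ] && return 0   # node showed up
        sleep 1
        i=$((i + 1))
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# e.g. wait_for_dev /dev/zvol/rpool/data/vm-112-disk-1 10
```

If a loop like this times out right after the zvol was created, that confirms udev is the bottleneck rather than ZFS itself.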

As a work-around, I can back up the disk and restore it to ZFS; this works as it should.

Any suggestions?
 
Okay...

After restoring the backup, all dir->zfs migrations are working again, something even a reboot couldn't fix...
 

I've been hit by the same problem. When you say "restoring the backup", does that mean you restored Proxmox itself, or just the VM?

Restoring the VM works for me on one ZFS pool, but not on the other ;(
 
Okay, I think this is related to the other rpool/etc. errors/problems: the ZFS events don't get responded to and serviced quickly enough by systemd-udevd, and the "-t 10" for udevadm in ZFSPoolPlugin.pm is too short under this error condition....
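If the timeout really is the culprit, the thing to experiment with is the standard udevadm trigger/settle sequence with a longer bound. This is a sketch using stock udevadm options, not the actual code from ZFSPoolPlugin.pm, and the pool path is the example from this thread:

```shell
# Sketch, not the plugin's actual code: replay block-device events and
# wait longer than the reported 10s for the udev queue to drain.
retry_udev_settle() {
    udevadm trigger --subsystem-match=block
    udevadm settle --timeout=30      # vs. the "-t 10" quoted above
    ls -l /dev/zvol/rpool/data/      # the missing node should now exist
}

# run as root on the PVE host: retry_udev_settle
```

If the node appears after a longer settle, that supports the too-short-timeout theory; if it never appears, the problem is in the udev rules themselves rather than the wait.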

The other symptom hitting me is that after a reboot, none of the VMs start: they also complain "can't find /dev/zvol/....", even though zfs list shows everything. Then, after a while, qm start begins to work for the VMs....

So the issue here is related to udev and it taking its time to sort things out..... how do I debug this???
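One way to start: watch the udev event queue while reproducing the problem. These are standard udev/systemd tools (run as root on the host); the function name and the 50-line tail are just for illustration:

```shell
# Suggested debugging session using stock udev/systemd tooling.
debug_udev_zvol() {
    journalctl -u systemd-udevd -b | tail -n 50   # slow or failing rules?
    udevadm monitor --udev --kernel &             # watch events live
    monpid=$!
    time udevadm settle --timeout=30              # how long until the queue drains?
    ls -l /dev/zvol/                              # what udev has created so far
    kill "$monpid"
}

# debug_udev_zvol
```

A settle that takes many seconds right after boot, while `zfs list` already shows the datasets, would match the symptom of VMs failing to start until "after a while".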

Linux proxfr01 4.15.15-1-pve #1 SMP PVE 4.15.15-6 (Mon, 9 Apr 2018 12:24:42 +0200) x86_64 GNU/Linux
Code:
root@proxfr01:~# pveversion --verbose
proxmox-ve: 5.1-43 (running kernel: 4.15.15-1-pve)
pve-manager: 5.1-52 (running version: 5.1-52/ba597a64)
pve-kernel-4.4.15-1-pve: 4.4.15-60
pve-kernel-4.2.8-1-pve: 4.2.8-41
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-30
libpve-guest-common-perl: 2.0-15
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-19
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-2
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
openvswitch-switch: 2.7.0-2
proxmox-widget-toolkit: 1.0-15
pve-cluster: 5.0-26
pve-container: 2.0-22
pve-docs: 5.1-17
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-3
qemu-server: 5.0-25
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.7-pve1~bpo9
root@proxfr01:~#
 
