This has happened 8 out of 20 times:
When I move a KVM hard disk from ZFS to Ceph, a copy of the drive ends up on Ceph, but the original drive is still on ZFS.
Example from a move log:
Code:
create full clone of drive scsi0 (data:vm-116-disk-0)
transferred: 0 bytes remaining: 8589934592 bytes total: 8589934592 bytes progression: 0.00 %
transferred: 85899345 bytes remaining: 8504035247 bytes total: 8589934592 bytes progression: 1.00 %
transferred: 171798691 bytes remaining: 8418135901 bytes total: 8589934592 bytes progression: 2.00 %
..
transferred: 8426725834 bytes remaining: 163208758 bytes total: 8589934592 bytes progression: 98.10 %
transferred: 8512625180 bytes remaining: 77309412 bytes total: 8589934592 bytes progression: 99.10 %
transferred: 8589934592 bytes remaining: 0 bytes total: 8589934592 bytes progression: 100.00 %
transferred: 8589934592 bytes remaining: 0 bytes total: 8589934592 bytes progression: 100.00 %
zfs error: cannot destroy 'data/vm-116-disk-0': dataset is busy
TASK OK
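For reference, as far as I know this move corresponds to roughly the following CLI call (the target storage name ceph_vm is taken from the config below, so treat this as a sketch rather than exactly what I ran):
Code:
# move scsi0 of VM 116 to the ceph storage and delete the source volume afterwards
qm move_disk 116 scsi0 ceph_vm --delete 1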
Then the ZFS volume cannot be destroyed:
Code:
# zfs destroy data/vm-116-disk-0
cannot destroy 'data/vm-116-disk-0': dataset is busy
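I do not know yet what keeps the zvol busy. The only checks I can think of (assuming the usual zvol device path under /dev/zvol and the Proxmox pid file under /var/run/qemu-server) are:
Code:
# leftover snapshots or clones of the volume?
zfs list -t all -r data | grep vm-116-disk-0
zfs get origin,usedbysnapshots data/vm-116-disk-0
# any process still holding the zvol block device open?
lsof /dev/zvol/data/vm-116-disk-0
fuser -v /dev/zvol/data/vm-116-disk-0
# is the still-running VM itself keeping the old zvol open?
lsof -p "$(cat /var/run/qemu-server/116.pid)" | grep vm-116-disk-0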
And the VM cannot be migrated to another system: the migration fails at the end of the process because the zfs destroy step fails.
/etc/pve/qemu-server/116.conf:
Code:
boot: c
bootdisk: scsi0
cores: 4
memory: 6816
name: mail
net0: virtio=42:20:57:2C:83:01,bridge=vmbr0,tag=3
numa: 0
onboot: 1
ostype: l26
protection: 1
scsi0: ceph_vm:vm-116-disk-0,discard=on,size=8G
scsihw: virtio-scsi-pci
smbios1: uuid=31ec85be-1d63-46b5-ab84-a29cda1df0aa
sockets: 1
On one node I resorted to restarting the node, but I still cannot destroy the ZFS volume.
It seems like a configuration file somewhere still treats the ZFS volume as part of the VM.
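To rule that out, the only places I know to check are the storage contents and the VM configs themselves (assuming the ZFS storage is called data, as in the move log above):
Code:
# does the old volume still show up on the ZFS storage?
pvesm list data
# is the old volume still referenced by any VM config?
grep -r vm-116-disk-0 /etc/pve/qemu-server/
# re-scan storages so an orphaned volume shows up as an unusedX entry in the config
qm rescan --vmid 116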
Not being able to migrate a VM kills high availability.
Any clues to solve this?