Migration stopped/failed, ZFS disk uses space and can't be deleted

cbx

Hello

I am trying to move some disks from a NAS to local storage, but sometimes I have to stop the jobs because of server load and run them again when activity is lower. When I do that, the ZFS disks get created and use space on the server, but they do not appear as "unused" disks on the VM and cannot be deleted from the GUI (the button stays inactive).
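As a rough check, the leftover volumes and the space they use can be listed directly on the node (assuming the default local-zfs dataset rpool/data, as seen in the migration log further down):

# list all volumes and snapshots for VM 108 on the local pool
zfs list -t all -r rpool/data | grep vm-108

# show how much space one leftover volume actually uses (example name)
zfs get used,referenced rpool/data/vm-108-disk-2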

I have seen this thread: https://forum.proxmox.com/threads/how-to-delete-a-zfs-image-file.32288/ but when I migrate the VM to another node... it also copies the ZFS disk! Why? It does not appear in the VM:


/usr/bin/kvm -id 108 -chardev socket,id=qmp,path=/var/run/qemu-server/108.qmp,server,nowait -mon chardev=qmp,mode=control -pidfile /var/run/qemu-server/108.pid -daemonize -smbios type=1,uuid=327d88bc-1ba9-4549-9279-6002266d3e30 -name XXXXXX -smp 8,sockets=1,cores=8,maxcpus=8 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vga std -vnc unix:/var/run/qemu-server/108.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 7168 -k es -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:bedf51f2cb88 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=100 -device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5 -drive file=/dev/QnapClusterHosting4/vm-108-disk-2,if=none,id=drive-scsi0,cache=writethrough,format=raw,aio=threads,detect-zeroes=on -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=200 -drive file=/dev/QnapClusterHosting4/vm-108-disk-1,if=none,id=drive-scsi1,cache=writethrough,format=raw,aio=threads,detect-zeroes=on -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1 -netdev type=tap,id=net0,ifname=tap108i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=FA:A3:29:D5:54:C7,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -netdev type=tap,id=net1,ifname=tap108i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=DA:43:91:20:3D:FC,netdev=net1,bus=pci.0,addr=0x13,id=net1,bootindex=301

When I migrate to another node it also copies the failed local-zfs disk. I don't know how the migration makes the relation between this disk image and the VM, since it does not appear in the VM details...

2017-10-24 10:15:19 starting migration of VM 108 to node 'b10'
2017-10-24 10:15:19 found local disk 'local-zfs:vm-108-disk-2' (via storage)
2017-10-24 10:15:19 found local disk 'local-zfs:vm-108-disk-3' (via storage)
2017-10-24 10:15:19 copying disk images
send from @ to rpool/data/vm-108-disk-2@__migration__ estimated size is 3.46G
total estimated size is 3.46G
TIME SENT SNAPSHOT
10:15:21 77.0M rpool/data/vm-108-disk-2@__migration__
10:15:22 162M rpool/data/vm-108-disk-2@__migration__
10:15:23 243M rpool/data/vm-108-disk-2@__migration__
10:15:24 334M rpool/data/vm-108-disk-2@__migration__
10:15:25 445M rpool/data/vm-108-disk-2@__migration__
10:15:26 578M rpool/data/vm-108-disk-2@__migration__
10:15:27 670M rpool/data/vm-108-disk-2@__migration__

Is it safe to delete it with zfs destroy name/of/dataset, without affecting the VM that is still working from the NAS? Why does the migration include these disks? Does it use the VM ID (108) for this relation?
 
VM migration and destruction both also check for disks associated with that VM by ID, even if they are not referenced in the VM configuration. If you are sure this is a leftover disk from a failed migration that just hasn't been correctly cleaned up, you can manually delete it.
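If it is indeed such a leftover, a rough cleanup sketch could look like this (dataset and storage names are taken from the logs above; local-zfs maps to rpool/data by default):

# see which volumes the storage layer associates with VM 108
pvesm list local-zfs --vmid 108

# alternatively, let PVE pick up orphaned volumes as "unused" disks in the
# VM config, so they can then be removed from the GUI
qm rescan --vmid 108

# or remove a leftover volume by hand; -r also destroys the stale
# __migration__ snapshot left behind by the aborted transfer
zfs destroy -r rpool/data/vm-108-disk-2

This only touches rpool/data, so the disks the VM actually uses on the NAS (QnapClusterHosting4) are not affected.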
 

Thanks a lot for the explanation...
 
