VM migration always resulting in error

damjank

Member
Apr 2, 2020
Hello friends,

I have a second PVE host/node now. I created a small cluster and want to migrate some machines to it - not all, some will stay on the original one; the idea is just to distribute the load.
I added the node to the cluster with no issues and also presented the storage - on both nodes there is a storage named "prod_storage", actually the same NVMe disk. I select a VM on the primary node, start the migration, choose the target disk and off we go. It takes a relatively short time to finish - I have migrated several VMs now and all of them really are on the second PVE host, but the task always says it completed with errors:

Code:
2020-11-13 17:47:48 ERROR: removing local copy of 'prod_storage_pve2:vm-122-disk-0' failed - zfs error: cannot destroy 'prod_storage/vm-122-disk-0': dataset is busy
drive-scsi0: transferred: 64514621440 bytes remaining: 0 bytes total: 64514621440 bytes progression: 100.00 % busy: 0 ready: 1
all mirroring jobs are ready
drive-scsi0: Completing block job...
drive-scsi0: Completed successfully.
drive-scsi0 : finished
2020-11-13 17:47:49 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve2' root@10.0.0.54 pvesr set-state 122 \''{}'\'
2020-11-13 17:47:50 stopping NBD storage migration server on target.
2020-11-13 17:47:54 ERROR: migration finished with problems (duration 00:16:17)
migration problems
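
For reference, the migration can also be started from the CLI; the following is only a sketch of a roughly equivalent command (VMID 122, target node pve2 and the storage name are taken from the log above - the options the GUI actually passes may differ):

Code:
qm migrate 122 pve2 --online --with-local-disks --targetstorage prod_storage_pve2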

And when I look at the disks I see this in the web GUI:
[Screenshot of the VM's disks in the web GUI]

Now - I am burning twice the storage, right? How do I remove the leftover copies manually, since I always get told the resource is busy? Also, the VM now has the disk with "-1" in its name attached - so the original "-0" is just unused, or what? Any guidance is appreciated! Thanks in advance!

rgD
 
OK, so I can remove those with zfs: find them with zfs list and then, for instance, zfs destroy prod_storage/vm-108-disk-0.

Still - is this cleaned up automatically later by some job? Is this expected behavior?
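
For the record, a sketch of the cleanup sequence I used (the VMID and dataset names below match the log above - double-check qm config before destroying anything):

Code:
# confirm which dataset the VM configuration actually references (the renamed -1 disk)
qm config 122 | grep scsi0
# list the datasets in the pool to spot the stale copy
zfs list -r prod_storage
# remove the leftover dataset once nothing references it any more
zfs destroy prod_storage/vm-122-disk-0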
 
Hi,
if the underlying ZFS storage is shared, but PVE doesn't know about it, you'll run into exactly these problems. PVE thinks the disk is not yet present on the target, because the storage is configured as local, so it copies the disk (and since a disk with that name already exists on the target, the copy is renamed). Consider configuring a ZFS over iSCSI storage instead.
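
For illustration, a rough sketch of how the two kinds of definitions could look in /etc/pve/storage.cfg - the first is a node-local ZFS pool restricted to one node, the second a shared ZFS over iSCSI storage. The pool name, portal address, target IQN and iscsiprovider value are placeholders, not taken from this thread:

Code:
zfspool: prod_storage
        pool prod_storage
        content images,rootdir
        nodes pve1

zfs: prod_shared
        pool tank
        portal 10.0.0.60
        target iqn.2003-01.org.linux-iscsi.storage:prod
        iscsiprovider LIO
        content images
        sparse 1

With a shared definition, PVE knows the disk is already reachable from the target node and doesn't copy (and rename) it during migration.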
 
Excellent suggestion, Fabian. I created another ZFS storage and, by trial and error with renaming and removing drives, I managed to migrate a VM, but that cannot be standard practice. I will take your suggestion and implement it immediately! Thanks, will report back with my findings.
 
