Some logic is completely missing here, what is it?

AngryAdm

I wonder why I supposedly cannot migrate from storage type DIR to RBD... because it is a blatant lie.

Go to Hardware, move the disk from DIR to RBD = SUCCESS.
Once done, migrate the VM... it is now all running on Ceph... literally migrated from DIR to RBD....

WHY is PVE lying and rolling over like a little child?


And WHY, god, WHY does a migration search "storage" and find all sorts of duplicate images PVE somehow created or forgot to clean up during an earlier disk migration?

Just migrate the drives in the CONFIG and forget about this "find shit in storage, copy that as well, and create duplicates with a slightly different name" behaviour. It's a huge headache and completely illogical! And ANNOYING!

Task viewer: VM 201 - Migrate (pve02 ---> pve01)

2021-12-29 15:11:33 use dedicated network address for sending migration traffic (10.11.0.1)
2021-12-29 15:11:34 starting migration of VM 201 to node 'pve01' (10.11.0.1)
2021-12-29 15:11:34 found local disk 'PVE02-STORAGE2:201/vm-201-disk-0.raw' (via storage)
2021-12-29 15:11:34 found local disk 'PVE02-STORAGE2:201/vm-201-disk-2.raw' (in current VM config)
2021-12-29 15:11:34 copying local disk images
2021-12-29 15:11:34 ERROR: storage migration for 'PVE02-STORAGE2:201/vm-201-disk-0.raw' to storage 'SSD01' failed - cannot migrate from storage type 'dir' to 'rbd'
2021-12-29 15:11:34 aborting phase 1 - cleanup resources
2021-12-29 15:11:34 ERROR: migration aborted (duration 00:00:01): storage migration for 'PVE02-STORAGE2:201/vm-201-disk-0.raw' to storage 'SSD01' failed - cannot migrate from storage type 'dir' to 'rbd'
TASK ERROR: migration aborted
 
Offline `Move Disk` to and from RBD is not possible. Online `Move Disk` on the other hand works, as it uses a different mechanism (QEMU).
All disks with the same VMID are part of the same VM, even if they're not referenced in the config. If you don't want issues like that, remove such leftover disks, and always select the option to delete the source disk when using `Move Disk` (see the sketch below).
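As a rough sketch (VMID 201 and the target storage SSD01 are taken from your log; the `scsi0` slot is just an assumed example), an online move that also deletes the source image could look like this:

```
# Move a disk of the running VM 201 onto the RBD storage 'SSD01' and delete
# the source image after a successful copy, so nothing lingers as an orphan.
# 'scsi0' is an assumed slot name - check `qm config 201` for the real one.
qm move_disk 201 scsi0 SSD01 --delete 1
```

On newer releases the same operation is also available as `qm disk move`. Without `--delete 1` the source image is kept and re-attached as an `unused` disk, which is exactly how such leftovers accumulate.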

Without this check, migration of a VM with unreferenced disks would lead to lots of fragmentation and even more headaches ;).

You can use `qm rescan` to scan all storages for unreferenced disks. These will then be added as `unused` disks to the config.
Once they are listed there, you can review them and delete any you don't need.
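A sketch of that workflow, again using VMID 201 from your log (the `unused0` entry name is just an example):

```
# Scan all storages for disk images that belong to VM 201 and add any
# unreferenced ones to its config as unused0, unused1, ...
qm rescan --vmid 201

# Review which unused entries were picked up.
qm config 201

# Delete an orphaned image you no longer need; unlinking an unused[n]
# entry also removes the underlying image from storage.
qm unlink 201 --idlist unused0
```

On newer releases the first step is also available as `qm disk rescan`.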
 
Hi, the `qm rescan` command was a useful hint.

However, the unreferenced disks that pop up left and right are Proxmox's own creation when it fails to move disks. It does not always remove them, even with the checkbox checked to delete the disk after the move.
 
Failing to remove the disk after a completed move is not a typical occurrence.
If you have logs from such a case, those would be very helpful.
 
