v4.4 move_disk (unable to parse volume ID)

Mark vd Putten

New Member
May 18, 2017
Goal:
Move LVM disks to GlusterFS, so some VMs can be moved to new hardware and the mail gateway can run HA.

Test:
I moved one newly created test VM to GlusterFS. At first it did not boot, but after setting the disk cache to writethrough it did.

Failure:
I don't know if this is relevant, but the existing machines were created before the second node joined the cluster. Backing up and changing hardware settings is no problem on the existing VMs.

I shut down an existing VM, backed it up, and tried the move again. This time it did not work. The error returned was: "unable to parse volume ID '/dev/cinder-volumes/vm-108-disk-1'", even though that is the correct volume ID.

From the command line:
Code:
qm move_disk 108 virtio0 GV01 -format qcow2
result:
Code:
unable to parse volume ID '/dev/cinder-volumes/vm-108-disk-1'

Wondering why the volume ID can't be parsed?
 
Because that is a pass-through block device, which is not managed by PVE.
 
I don't quite understand how this relates to the 'volume ID' parsing?
Besides, a new test VM with its disk on the same LVM volume group worked well.

There seems to be a workaround:
Manually copy the logical volume to a raw disk file under <GlusterFS mount point>/images/<vmid>/ using 'dd', then change the VM config file at /etc/pve/nodes/pve/qemu-server/<vmid>.conf to match the new disk.
Code:
bootdisk: scsi0
scsi0: GV01:<vmid>/vm-<vmid>-disk-1.raw,cache=writethrough,size=8G
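The copy step above can be sketched as a shell snippet. The VM ID, source LV path, and the GlusterFS mount point /mnt/pve/GV01 are assumptions based on this thread; the actual dd copy is left commented out, since it must run on the PVE node against the real logical volume with the VM shut down.

```shell
# Assumed names from this thread: VM 108, cinder-volumes VG, storage GV01
VMID=108
SRC_LV=/dev/cinder-volumes/vm-${VMID}-disk-1
DST_DIR=/mnt/pve/GV01/images/${VMID}
DST_IMG=${DST_DIR}/vm-${VMID}-disk-1.raw

echo "would copy ${SRC_LV} -> ${DST_IMG}"
# On the PVE node, with the VM shut down:
#   mkdir -p "${DST_DIR}"
#   dd if="${SRC_LV}" of="${DST_IMG}" bs=4M conv=sparse status=progress
# then point the disk at GV01:${VMID}/vm-${VMID}-disk-1.raw in the VM config.
```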
 

PVE will only move/delete/... disks which are managed by PVE. If you pass through a block device, PVE will use it (pass it through to the VM), but it won't manage it. A volume managed by PVE is something which is referenced as "SOMESTORAGE:SOMEVOLUME", where SOMESTORAGE is a configured storage, and SOMEVOLUME is some identifier (can look different depending on storage and volume type).

E.g.,
"scsi0: /dev/sda" is not managed by PVE
"scsi0: /mnt/pve/something/image.raw" is not managed by PVE
"scsi0: /var/lib/vz/images/101/vm-101-disk-1.raw" is not managed by PVE
"scsi0: local:101/vm-101-disk-1.raw" is managed by PVE, and might be the same actual disk image as the previous one
"scsi0: /dev/mapper/myvg/vm-100-disk-1" is not managed by PVE
"scsi0: myvg:vm-100-disk-1" is managed by PVE, and might be the same actual disk image as the previous one
"scsi0: somestorage:base-101-disk-1/vm-100-disk-1" is also managed by PVE
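The pattern behind the examples above can be illustrated with a small shell function. This is a rough sketch of the distinction, not PVE's actual parser: a reference is only treated as a managed volume ID when it has the "storage:volume" shape, while anything starting with "/" is taken as a raw pass-through path.

```shell
# Rough illustration (not PVE's actual parsing code):
is_managed() {
  case "$1" in
    /*)  echo unmanaged ;;   # absolute path: passed through as-is, not managed
    *:*) echo managed ;;     # storage:volume pair: managed by PVE
    *)   echo unmanaged ;;
  esac
}

is_managed "/dev/cinder-volumes/vm-108-disk-1"   # unmanaged -> parse error
is_managed "pve-lvm:vm-108-disk-1"               # managed
```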
 
The path does seem to follow the PVE naming conventions; did you add it as a path manually to the config at some point, instead of as a storage:diskname pair?

I don't quite understand how this relates to the 'volume ID' parsing?
You're right that the error message is misleading and could be improved. What happens is that the guest management layer tells the storage layer to perform the move without first checking whether the disk is actually managed by the storage layer. Since the config contains the raw path (/dev/...) rather than a PVE-managed storage ID (storage:diskname), the storage layer fails to recognize it and throws this error.
 
OK, this makes sense!
Knowing this, the error message is meaningful, and it explains why the newly created test VM moved as expected.

@wbumiller
Looking back in my notes, that disk was indeed not created by PVE but belonged to an already existing machine. The disk was imported as follows:
Code:
qm set 108 -virtio0 /dev/cinder-volumes/vm-108-disk-1

@fabian
Making the disk reference managed, from
Code:
"virtio0: /dev/cinder-volumes/vm-108-disk-1,size=20G"
to
Code:
"virtio0: pve-lvm:vm-108-disk-1,size=20G"
makes the difference.
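For reference, the same change can also be applied with `qm set` instead of editing the config file by hand. The storage name `pve-lvm` is taken from the post above and is assumed to be an LVM storage entry in /etc/pve/storage.cfg backed by the cinder-volumes volume group; the actual commands are left commented out, since they must run on the PVE node.

```shell
# Assumed storage name "pve-lvm" (LVM storage over the cinder-volumes VG).
# On the PVE node:
#   qm set 108 -virtio0 pve-lvm:vm-108-disk-1,size=20G
# After this, move_disk accepts the disk:
#   qm move_disk 108 virtio0 GV01 -format qcow2
CONF_LINE="virtio0: pve-lvm:vm-108-disk-1,size=20G"
echo "$CONF_LINE"   # resulting line in /etc/pve/qemu-server/108.conf
```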

Thanks for sharing!
 
