Hi all,
I am a bit confused with regard to the visibility of a VM's LVM devices from the PVE host.
Inside the VMs I have partitions like this:
/dev/vda1 => /boot
/dev/vda2 => extended partition
/dev/vda5 => LVM physical volume
and then one logical volume for root and one for swap.
This is the result of having used the Debian installer and choosing to use the whole disk (/dev/vda) with LVM enabled.
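For clarity, inside the guest the layout looks roughly like this (lsblk-style sketch; the VG name is just a placeholder and sizes are omitted):

  vda
  ├─vda1            /boot
  ├─vda2            (extended)
  └─vda5            LVM PV
    ├─<vg>-root     /
    └─<vg>-swap     [SWAP]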
Now, from the proxmox node, I can see all these devices, which seems odd to me.
From other threads I gather that some people have seen this when using e.g. /dev/vda inside a VM as a physical device, but I am not doing that.
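For reference, this is roughly how I am checking on the host, using plain LVM and util-linux tools (nothing guest-specific):

  # on the PVE node
  lsblk    # the guest's partitions/LVs show up nested under the VM's disk
  pvs      # lists the guest's physical volume
  vgs      # lists the guest's volume group
  lvs      # lists the guest's root and swap logical volumes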
To add to my confusion, if I try to migrate one of those VMs (e.g. moving the disk to Ceph and then migrating the VM), I get an error telling me that the VM still uses local disks.
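In case the exact steps matter, this is more or less what I am doing (or the equivalent in the GUI; the VM ID, disk and storage names below are placeholders):

  # move the disk from local storage to the Ceph pool, deleting the source copy
  qm move_disk <vmid> virtio0 <ceph-storage> --delete 1
  # then try to migrate to another node
  qm migrate <vmid> <target-node>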
Any ideas on how to solve this? (Apart from refraining from using LVM inside VMs from now on.)
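One idea I had, though I am not sure it is the right approach: would adding an LVM device filter along these lines to /etc/lvm/lvm.conf on the PVE node stop the host from scanning and activating the guests' PVs? The pve-vm pattern is just a guess based on LVM-thin backed disks, and I have not tested this:

  devices {
      # reject the host-side block devices that back VM disks, so the host's
      # LVM scan never sees the PVs the guests created inside them
      global_filter = [ "r|/dev/zd.*|", "r|/dev/rbd.*|", "r|/dev/mapper/pve-vm--.*|" ]
  }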
Thanks,
Martin