Converting RAW to qcow2 - For Snapshots

jchwsu86

New Member
Dec 6, 2020
My current installation of Proxmox 6.3-2 is configured to run my VMs on a local SSD. I have the SSD set up as an LVM.

I wanted to test the snapshot feature, but I receive the error "The current guest configuration does not support taking new snapshots".
  • I found I can use the "Move Disk" feature in the GUI and select the same storage as the current source - however, the "qcow2" option is greyed out

I don't want to run my virtual machines off my CIFS drive, so I am trying to convert the files.
  • I have tried converting the file using:
    Code:
    qemu-img convert -f raw -O qcow2 /dev/VMs/vm-101-disk-0 /dev/VMs/vm-101-disk-0.qcow2
  • This appears to convert successfully, but after it is done, my virtual machine cannot access the disk.
  • I have tried editing the config:
    Code:
    /etc/pve/qemu-server/101.conf
    • This just results in the error "TASK ERROR: unable to parse volume filename 'vm-101-disk-0'"

How do you edit the location of the VM's drive? I tried editing the config above and changing the extension. I also tried navigating to /var/lib/vz/images/101, but there is no data past "images".

How do I tell the VM where the location of the "new" disk is?

Is it possible to do this on an LVM? Or is this truly limited to only network storage virtual machines?
 
Hi,

Plain LVM does not support snapshots; LVM-thin does.
Also, LVM is block-level storage, so the VM images are always raw. You cannot convert them to .qcow2 (or vmdk), which is why the option is greyed out in the GUI. If you move the disk to the default "local" storage (which is of type Directory, i.e. file level), you will see that you can then choose .qcow2.
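For completeness, the "Move Disk" action can also be done on the CLI; a minimal sketch, assuming the disk in question is scsi0 on VM 101 (check the actual disk name with qm config 101 first):
Code:
# move scsi0 of VM 101 to the file-level "local" storage, converting it to qcow2
qm move_disk 101 scsi0 local --format qcow2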

The path that you looked at, /var/lib/vz, is the default path of said "local" storage.
VM disks that are in your "local" storage will be in /var/lib/vz/images.
VM disks that are in your LVM storage will appear under /dev/pve/.

You can see all your storages in the GUI or by typing pvesm status into the CLI.
You can see all images that are on a storage by clicking on the storage in the GUI and selecting "VM Disks", or by typing pvesm list local (replace "local" with the required storage name).
You can see where an image from the previous command really is by typing, for example, pvesm path local:107/vm-107-disk-0.raw.
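Put together, such a check could look like this on the shell (the storage name and volume ID are only examples, adjust them to your setup):
Code:
# list all configured storages and their usage
pvesm status
# list the images on the "local" storage
pvesm list local
# resolve a volume ID to its real path
pvesm path local:107/vm-107-disk-0.raw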

There is a comprehensive list of storage types and their properties in our wiki.
 
Thank you! Your explanation was very helpful. I didn't realize that with LVM-thin you could take a snapshot even if the disk was still raw.
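For reference, taking a snapshot on LVM-thin also works from the CLI; a minimal sketch, assuming VM 101 and an arbitrary snapshot name:
Code:
# create a snapshot of VM 101 named "pre-update"
qm snapshot 101 pre-update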

I do have a question, though. Your comment suggests that my (non-local) storage LVs should be in /dev/pve - if you look at my info below, they don't seem to be there, but instead under /dev/VMs. Am I looking at this correctly?

Bash:
root@pve:/dev/VMs# pvesm status
Name             Type     Status           Total            Used       Available        %
VMs           lvmthin     active       122404864        27528853        94876010   22.49%
local             dir     active         7159288         2428600         4347304   33.92%
local-lvm     lvmthin     active        13414400               0        13414400    0.00%
qnap             cifs     active      3787709536      2706681884      1081027652   71.46%
root@pve:/dev/VMs# pvesm list VMs
Volid             Format  Type             Size VMID
VMs:vm-101-disk-0 raw     images    10737418240 101
VMs:vm-103-disk-0 raw     images    34359738368 103
root@pve:/dev/VMs# cd /dev/pve/
root@pve:/dev/pve# ls
data  root  swap
root@pve:/dev/pve# cd /dev/VMs
root@pve:/dev/VMs# ls
vm-101-disk-0  vm-103-disk-0
 
Hi,

the part about /dev/pve was maybe a bit imprecise.
If you type cat /etc/pve/storage.cfg you will see your exact storage configuration. Each of your lvmthin storages should contain an element "vgname".
Those vgnames are what you should see under /dev. So in your case there is one vgname pve with a folder /dev/pve, and one vgname VMs with a folder /dev/VMs.
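For illustration, an lvmthin entry in /etc/pve/storage.cfg looks roughly like this (the thinpool name is just a placeholder here, yours may differ):
Code:
lvmthin: VMs
        thinpool VMs
        vgname VMs
        content images,rootdir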

If you click through the ISO installer and choose ext4 as the filesystem, then there is an ext4 filesystem and a default LVM volume group pve. So that's where the pve that I mentioned comes from.

To get an idea of all the LVM stuff, it might be interesting to compare the output of the following commands:
Code:
lvs          # logical volumes (short overview)
lvdisplay    # logical volumes (detailed)
vgs          # volume groups (short overview)
vgdisplay    # volume groups (detailed)
pvs          # physical volumes (short overview)
pvdisplay    # physical volumes (detailed)
df -Th       # mounted filesystems with type and usage
fdisk -l     # disks and partition tables