Proxmox disks changed from qcow to raw after migration

Nakata

Renowned Member
Jun 13, 2012
Hello,
I have just installed a new cluster with Proxmox 7.0. It currently has 2 nodes with slightly different PVE versions,
pve-manager/7.0-11/63d82f4e and pve-manager/7.0-8/b1dbf562.
When I live migrate a VM from one node to another (it doesn't matter in which direction, the problem appears both ways), the qcow2 format of the disks is changed to raw. I thought this problem was solved in version 6? What should I do?

FYI: I'm in the process of migrating from 6.1.3. I had one new server, created a new 7.0 cluster, backed up the VMs from the old 6.1.3 cluster node and restored them on the new server, then reinstalled the emptied server from 6.1.3 to 7.0 and joined it to the new cluster. Now I want to migrate the VMs back and continue reinstalling the rest of the nodes the same way. There is not much room for experiments: I had only one spare server, one production node is already migrated to 7.0, and a second node has also been reinstalled to 7.0.

I also tried creating a new VM on the new 7.0 server and migrating it to the other one, and it was changed to raw format again.

To summarize, here is the relevant line from /etc/pve/qemu-server/116.conf:
- this is the disk definition on the old 6.1.3 cluster:
virtio0: local:804/vm-116-disk-0.qcow2,format=qcow2,size=80G
- then it was backed up in Stop mode to NFS and restored to the new 7.0 server, where it looked like this:
virtio0: local-lvm:vm-116-disk-0,cache=none,media=disk,size=80G
- after live migration to another 7.0 node:
virtio0: local-lvm:vm-116-disk-0,cache=none,format=raw,media=disk,size=80G
 
You can convert back to qcow2 with qemu-img. I would report this as a bug as well.

Example

Code:
qemu-img convert -f raw -O qcow2 vm-100-disk-0.raw vm-100-disk-0.qcow2
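In this thread the target storage is LVM-thin, so the raw image is a thin LV rather than a file. A minimal sketch of the same conversion in that situation, assuming the default volume group pve, VM 116, and a directory storage named local at /var/lib/vz (the names and paths are assumptions; stop the VM first and adjust to your setup):

Code:
# assumes VM 116 is stopped, the thin LV lives in VG "pve",
# and "local" is a directory storage under /var/lib/vz
mkdir -p /var/lib/vz/images/116
qemu-img convert -f raw -O qcow2 /dev/pve/vm-116-disk-0 /var/lib/vz/images/116/vm-116-disk-0.qcow2

Afterwards the virtio0 line would have to be pointed at the new volume; letting the Move disk function handle that (see below) avoids editing the config by hand.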
 
You can move the disk in the VM's hardware settings to a different Proxmox storage location and select RAW as the output format. Then you can move it back again and also select RAW. Maybe this MacGyvering will work for you? You do not have to have the VM powered off while moving it on the same host.
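For what it's worth, the same Move disk operation can also be done from the CLI; a hedged example, assuming VM 116, the disk attached as virtio0, and a file-based storage named local that can hold qcow2 images (the storage name is an assumption, adjust to your setup):

Code:
qm move_disk 116 virtio0 local --format qcow2

There is also a --delete option to drop the old copy once the move has succeeded, if you don't want to keep it around as an unused disk.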
 
Hi,
and does qcow2 over LVM-thin make sense at all? It is stated here that only the raw format is supported on LVM-Thin storage:
https://pve.proxmox.com/wiki/Storage:_LVM_Thin
Anyway, if I migrate images of stopped servers, that format=raw is not in the config...
If you want to use qcow2, you need a filesystem-based storage. Is there a special reason you want to use qcow2? LVM-Thin also supports snapshots, for that matter. The format=XYZ in the configuration is (mostly) informational, i.e. changing it will not change the actual format, and it isn't strictly required to be there.
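For context, it is the storage type that determines the available image formats: directory (dir) storages are file based and can hold qcow2, raw and vmdk images, while lvmthin storages are block based and always use raw. A minimal /etc/pve/storage.cfg sketch with the default entry names local and local-lvm (your names, paths and content lists may differ; images has to be in the local content list if you want to keep qcow2 disks there):

Code:
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir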
 
Thank you for the info, Fabian.
So LVM-thin supports only raw storage and the situation is normal?

I just didn't understand why VMs on LVM-thin are showing a different format.
This one was imported from a backup made of a stopped VM:
virtio0: local-lvm:vm-116-disk-0,cache=none,media=disk,size=80G
And this is the same one after a live migration to another node:
virtio0: local-lvm:vm-116-disk-0,cache=none,format=raw,media=disk,size=80G
 
So LVM-thin supports only raw storage and the situation is normal?
Yes.

I just didn't understand why VMs on LVM-thin are showing a different format.
That's just a quirk of how things are implemented (there is likely some historical reason behind it). Not having the explicit format parameter shouldn't affect anything; Proxmox VE will determine the format when it is needed.
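If you ever want to check the actual format rather than what the config line says, you can ask the storage layer directly; a hedged example, assuming the default local-lvm storage (volume group pve) and VM 116:

Code:
pvesm list local-lvm --vmid 116
qemu-img info /dev/pve/vm-116-disk-0

pvesm list prints a Format column for each volume, and qemu-img info on the thin LV reports the image format it detects (raw in this case).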
 