VM Storage migration

digidax

Hi,
I want to change the disk format of a VM which resides on ZFS storage, but the option is greyed out:
[screenshot: Move disk dialog with the Format selector greyed out]
But if I change the target storage to a GlusterFS mount, I can select the disk format.
[screenshot: Move disk dialog with a GlusterFS target, Format selectable]
I stumbled across this because a VM on Node1 has the qcow2 format while a VM on Node 2 has the raw format. The VM with the raw format produces more I/O load, so I want to change its disk format.

Any ideas?

Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-5
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-22
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
If you use the zpool as ZFS storage, you cannot choose the format, because the disks are created as zvols, not as files.
If you want the qcow2 format, you have to choose a file-based storage.
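
For illustration, the difference shows up in /etc/pve/storage.cfg. The entries below are only example definitions; the storage names, pool, server and volume are placeholders, not taken from your setup:

Code:
# block-based: disks are created as zvols, so the format is always raw
zfspool: local-zfs
        pool rpool/data
        content images,rootdir

# file-based: disk images are files, so qcow2 is possible
glusterfs: gluster-store
        server 10.0.0.1
        volume gv0
        content images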
 
Okay, but why does a VM on Node1 have the qcow2 format while a VM on Node 2 has the raw format?
Was that decided during the creation of the VM? Is it possible to do a backup / restore and set the disk format to qcow2 during the restore? Sorry for asking, I'm working with VMs for the first time.
 
Update:
I have moved the disk (raw) from the ZFS storage to the GlusterFS storage with qcow2 as the target format. Then I moved it back to ZFS.
Now it shows on ZFS:
[screenshot: the VM disk now listed on the ZFS storage]
and, as expected, the I/O wait time has dropped.
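
For reference, the same two moves should also be possible from the CLI with qm move_disk. This is only a sketch; the VMID 100, the disk scsi0 and the storage names are placeholders for the real values:

Code:
# move the raw disk to the file-based GlusterFS storage and convert it to qcow2
qm move_disk 100 scsi0 gluster-store --format qcow2 --delete 1

# moving it back to the ZFS pool turns it into a raw zvol again (no format option)
qm move_disk 100 scsi0 local-zfs --delete 1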

Any explanations? Is this a bug or a feature? What disk format does the VM have now?
 
Your vm "disk" format is raw, can't be anything different in ZFS pool volume. Since possible formats depend upon the type of storage you have, in your screenshot the "Target Storage" is not shown, so the "Format" is the default, but has no meaning until you choose the "Target storage". Once you choose the "Target storage" then you will have in "Format" only the supported file format available on that specific storage.
The fact that the "Target storage" field is empty could be a bug or just a browser glitch, in any case you can't move disk without specify the target storage.
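
If you want to verify the current format from the shell, something along these lines should work (the VMID 100 and the storage name local-zfs are placeholders):

Code:
# the disk line in the VM config shows the storage and volume name
qm config 100 | grep -E '^(scsi|virtio|sata|ide)[0-9]+:'

# list the volumes on the ZFS storage; the Format column will show raw
pvesm list local-zfs

# ZFS-backed disks are zvols and show up in zfs itself
zfs list -t volume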
 
