Disk Move issue from NFS

Code:
root@proxmox-01:~# ls -l /etc/pve/
total 8
-rw-r----- 1 root www-data  451 Nov 14 10:43 authkey.pub
-rw-r----- 1 root www-data  451 Nov 14 10:43 authkey.pub.old
-rw-r----- 1 root www-data  790 May  4  2022 ceph.conf
-rw-r----- 1 root www-data  557 May  9  2022 corosync.conf
-rw-r----- 1 root www-data  563 May  6  2022 corosync.conf.bak
-rw-r----- 1 root www-data  557 May  9  2022 corosync.conf.bak_2
-rw-r----- 1 root www-data  115 Nov  7 10:30 datacenter.cfg
-rw-r----- 1 root www-data  416 May 13  2022 domains.cfg
drwxr-xr-x 2 root www-data    0 May  2  2022 ha
-rw-r----- 1 root www-data 1201 Nov  7 10:16 jobs.cfg
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 local -> nodes/proxmox-01
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 lxc -> nodes/proxmox-01/lxc
drwxr-xr-x 2 root www-data    0 May  2  2022 nodes
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 openvz -> nodes/proxmox-01/openvz
drwx------ 2 root www-data    0 May  2  2022 priv
-rw-r----- 1 root www-data 2074 May  2  2022 pve-root-ca.pem
-rw-r----- 1 root www-data 1675 May  2  2022 pve-www.key
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 qemu-server -> nodes/proxmox-01/qemu-server
-rw-r----- 1 root www-data    0 Nov 14 14:35 replication.cfg
drwxr-xr-x 2 root www-data    0 May  2  2022 sdn
-rw-r----- 1 root www-data  130 Oct  7 11:20 status.cfg
-rw-r----- 1 root www-data 1004 Oct 18 12:48 storage.cfg
-rw-r----- 1 root www-data 2209 Nov 14 14:35 user.cfg
drwxr-xr-x 2 root www-data    0 May  2  2022 virtual-guest
-rw-r----- 1 root www-data  120 Nov 14 14:35 vzdump.cron

root@proxmox-01:~# ls -l /etc/pve/priv/ceph
total 2
-rw------- 1 root www-data 151 May  3  2022 ceph-01.keyring
-rw------- 1 root www-data  41 May  3  2022 cephfs.secret
-rw------- 1 root www-data 151 May 26 14:06 vmdisks-hdd.keyring
 
That looks good.

Moving a disk from `qcow2` to `raw` on RBD works just fine here with my Ceph cluster, so the issue must be some configuration on your side.
Can you provide the output of the following commands?
pvesm list vmdisks-hdd
cat /etc/pve/ceph.conf
 
Does that also hold for NFS to RBD? Could the issue be in this specific migration path?

Here is the output:

Code:
root@proxmox-01:~# pvesm list vmdisks-hdd
Volid                        Format  Type              Size VMID
vmdisks-hdd:base-110-disk-1  raw     images     10737418240 110
vmdisks-hdd:vm-105-disk-0    raw     images     10737418240 105
vmdisks-hdd:vm-105-disk-1    raw     images     21474836480 105
vmdisks-hdd:vm-106-disk-0    raw     images     34359738368 106
vmdisks-hdd:vm-106-disk-1    raw     images    133143986176 106
vmdisks-hdd:vm-108-disk-0    raw     images     42949672960 108
...... more disks like this
No disk from vm-132 seems to be here!

root@proxmox-01:~# cat /etc/pve/ceph.conf
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 10.10.42.10/24
         fsid = 79xxxxx6
         mon_allow_pool_delete = true
         mon_host = 10.10.42.10 10.10.42.20 10.10.42.30
         ms_bind_ipv4 = true
         ms_bind_ipv6 = false
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = 10.10.42.10/24

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
         keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.proxmox-01]
         host = proxmox-01
         mds_standby_for_name = pve

[mds.proxmox-02]
         host = proxmox-02
         mds_standby_for_name = pve

[mon.proxmox-01]
         public_addr = 10.10.42.10

[mon.proxmox-02]
         public_addr = 10.10.42.20

[mon.proxmox-03]
         public_addr = 10.10.42.30
 
That's very strange.
NFS storages are handled just like directory storages, so that shouldn't make a difference.

Can you create a small 1GB disk on the NFS or on local storage and then try to move it to the RBD storage?
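
In case it helps, here is a rough sketch of how that test could be done on the CLI. The storage names (TrueNAS for the NFS share, vmdisks-hdd for RBD), the VM ID 132 and the free scsi3 slot are assumptions based on this thread; adjust them to your setup.

Code:
# Create a 1 GiB test disk on the NFS storage and attach it to VM 132 (scsi3 assumed free)
qm set 132 --scsi3 TrueNAS:1,format=qcow2
# Try to move it to the RBD storage, converting to raw on the way
qm move_disk 132 scsi3 vmdisks-hdd --format raw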
 
I just created a new disk on the NFS and attached it to the same VM. Moving it to the RBD storage works just fine, so there must be an issue with the virtual disk itself. I performed the following steps when migrating from Azure:

- Created a snapshot
- Downloaded it to the TrueNAS directory on a Proxmox node.
- qemu-img convert -pf vpc <disk_name>.vhd -O qcow2 <disk_name>.qcow2
- Imported the disk to a newly created VM (see the sketch below)
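
For reference, a sketch of what the convert and import steps look like on the CLI, using the file and storage names that come up later in this thread; the volume name created by importdisk (vm-132-disk-2 here) is just an example, qm importdisk prints the actual one:

Code:
# Convert the downloaded Azure VHD to qcow2
qemu-img convert -p -f vpc osdisk.vhd -O qcow2 osdisk.qcow2
# Import it as an unused disk of VM 132 onto the NFS storage
qm importdisk 132 osdisk.qcow2 TrueNAS --format qcow2
# Attach the imported volume, e.g. as scsi2 (use the volid reported by importdisk)
qm set 132 --scsi2 TrueNAS:132/vm-132-disk-2.qcow2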

Ran qemu-img info before converting:

Code:
qemu-img info <filename>
root@proxmox-01:/mnt/pve/TrueNAS/Azure# qemu-img info osdisk.vhd
image: osdisk.vhd
file format: raw   # indicates it is detected as raw
virtual size: 32 GiB (34359738880 bytes)
disk size: 1.32 GiB
 
Can you try it without specifying the format of the input file? It should auto-detect the format.
qemu-img convert -p <inputfile> -O qcow2 <outputfile>
 
Still no luck. I performed the following steps:

- qemu-img convert -p osdisk.vhd -O qcow2 osdisk.qcow2
- qm importdisk 132 osdisk.qcow2 vmdisks-hdd

Code:
copy failed: command '/usr/bin/qemu-img convert -p -n -f qcow2 -O raw osdisk.qcow2 'zeroinit:rbd:vmdisks-hdd/vm-132-disk-1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/vmdisks-hdd.keyring'' failed: exit code 1

- Edited /etc/pve/qemu-server/132.conf and added the new disk location as an unused disk
- The disk is available in the PVE GUI
- Tried to move the disk to Ceph

Same error.
 
If you have the space, you could try to use the format `raw` instead to see if that makes a difference.
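
For example (same file names as above; the raw file may need up to the full 32 GiB virtual size as free space on the NFS share):

Code:
# Convert directly to raw instead of qcow2, letting qemu-img probe the input format
qemu-img convert -p osdisk.vhd -O raw osdisk.raw
# Import the raw image straight onto the RBD storage
qm importdisk 132 osdisk.raw vmdisks-hdd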
 
A colleague hinted at a known bug, introduced with QEMU 5.1, that occurs when the source and target sizes are not the same: https://bugzilla.proxmox.com/show_bug.cgi?id=3227

Perhaps, since qemu-img is used, it's the exact same issue here.

Could you resize your VM disk so that its size is 4 MiB-aligned (a multiple of 4 MiB)? That should satisfy the RBD constraints.
Once that is done, try moving the disk again.
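
A sketch of how that could be done, based on the qemu-img info output above: the virtual size of 34359738880 bytes is 32 GiB plus 512 bytes (presumably the fixed-VHD footer being counted as data), so it is not a multiple of 4 MiB. The scsi2 slot below is an assumption.

Code:
# The current virtual size is not 4 MiB-aligned:
echo $(( 34359738880 % (4 * 1024 * 1024) ))   # prints 512, i.e. misaligned
# Either grow the qcow2 file to the next convenient aligned size before importing...
qemu-img resize osdisk.qcow2 33G
# ...or resize the disk that is already attached to the VM
qm resize 132 scsi2 33G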
 
That seems to have solved the issue! I resized the qcow2 image to 35433480192 bytes (33 GiB; I think `qm resize 132 scsi2 33G` would also have worked).
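
For completeness, 35433480192 bytes is exactly 33 GiB and a clean multiple of 4 MiB, which is why the move now succeeds:

Code:
echo $(( 35433480192 % (4 * 1024 * 1024) ))    # 0 -> 4 MiB-aligned
echo $(( 35433480192 / (1024 * 1024 * 1024) )) # 33 -> 33 GiB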

Now Proxmox is also correctly showing the disk size in the GUI (32G instead of 34359738368).

Thank you very much for your help and time!
 
That's great! And it's @fiona you have to thank. She mentioned that issue to me after reading this thread.
 
Today I ran into this issue as well. The goal was to migrate a data disk from Azure to a Proxmox cluster. After downloading the image to the Proxmox CephFS, I converted it to raw (qemu-img convert) and tried to import it to the Ceph pool, but with no luck. I got the error "qemu-img: output file is smaller than input file", obviously for the reasons explained above in this thread. To bypass the problem, I imported the disk to the VM through local-zfs, resized the disk from the GUI (just a few GBs were enough), and then did a "Move Storage" from the GUI to the Ceph pool without any problem. Thanks to @mira and @tri-dp and of course @fiona.
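
For anyone taking the same route, a rough CLI equivalent of those steps; the VM ID, file name and pool name are placeholders, and the exact volume name created by importdisk will differ:

Code:
# Import the converted raw image onto local-zfs first
qm importdisk 201 datadisk.raw local-zfs
# Attach the resulting unused disk, e.g. as scsi1 (use the volid reported by importdisk)
qm set 201 --scsi1 local-zfs:vm-201-disk-0
# Grow it by a few GiB, the equivalent of the GUI resize mentioned above
qm resize 201 scsi1 +4G
# Move it to the Ceph pool, the equivalent of "Move Storage" in the GUI
qm move_disk 201 scsi1 <ceph-pool> --format raw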
 