Cannot move subvol-based CT(s) from zfs or local storage to ceph rbd or nfs

jinjer

I'm having issues moving a CT root fs from local storage (ZFS) to Ceph RBD.
The problem only occurs on existing CTs created some time ago, back when I was running Proxmox 4.x.
The error does not occur on new CTs created with Proxmox 6.2 from an Ubuntu 20.x template.

EDIT: It seems the issue is limited to subvol-based CTs, where the size of the disk is detected as 0 (see posts #2 and #3). Raw-based CTs work fine.

My setup is a 6-node Proxmox cluster, of which 3 nodes are hyper-converged Ceph nodes. The 4th node has a connection to the Ceph storage.

The problem seems to be an error when creating the RBD device, so it cannot be formatted:

Code:
/dev/rbd0
mke2fs 1.44.5 (15-Dec-2018)
mkfs.ext4: Device size reported to be zero.  Invalid partition specified, or
    partition table wasn't reread after running fdisk, due to
    a modified partition being busy and in use.  You may need to reboot
    to re-read your partition table.

Removing image: 100% complete...done.
TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /dev/rbd/vmd/vm-107-disk-0' failed: exit code 1

Here is the only line I get in the kernel log:
Code:
Aug  8 10:57:59 node4 kernel: [321974.128498] rbd: rbd1: capacity 0 features 0x3d
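For anyone wanting to double-check on the Ceph side, something along these lines should show the zero-sized image (the CT ID and the pool name "vmd" are taken from the task log above; since the image is removed again when the task fails, the rbd commands need to run from a second shell while the move is still in progress):
Code:
# list images in the pool together with their sizes
rbd ls -l vmd
# detailed info on the image the move task just created
rbd info vmd/vm-107-disk-0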
I also tried migrating the rootfs of the CT(s) from the local ZFS storage to the classic directory storage (/var/lib/vz). This works. Migrating from the directory storage to the RBD device then gives the same error. This is true for multiple CTs.

Backup/restore onto the RBD storage works normally; it's only moving the rootfs that has issues. I'm not sure if this is a bug or something wrong with my installation.

I installed the Ceph cluster from the Proxmox GUI, so I'm not sure what to check. Is there a way to get a detailed debug log of the operations performed by Proxmox, so I can try to understand what the error is?
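So far the most I found myself is that the task output seems to be kept on disk and the daemons log to the journal, roughly like this (standard locations as far as I can tell):
Code:
# per-node task logs; the "index" file maps recent task IDs (UPIDs) to their log files
ls /var/log/pve/tasks/
# daemon-side messages around the time of the failed move
journalctl -u pvedaemon -u pveproxy --since "1 hour ago"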



thank you.
 
A small update: so far, the only difference I've found between the CTs that can migrate and those that can't is that the ones that can are stored as "raw" images, while the ones that can't are stored as "subvol".

There's also an issue when moving disks from subvol-based local storage to native ZFS local storage. In this case, the new ZFS filesystem cannot be created. The error is:
Code:
TASK ERROR: zfs error: cannot create 'rpool/vm/subvol-109-disk-0': out of space
However, this is a 16G CT and there is more than 2 TB free on the ZFS storage.
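For reference, this is roughly how I checked the space on the target and the quota on the source subvol (the target dataset rpool/vm is from the error above; the source subvol path is just an example from my setup):
Code:
# free space on the target parent dataset
zfs list -o name,used,avail rpool/vm
# quota actually set on the source subvol of the CT (path is an example)
zfs get refquota,quota,used,referenced rpool/data/subvol-109-disk-0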

Perhaps this is a bug in Proxmox.

I investigated further. A "pct rescan" on the node will update the size of the root disk, and I get size=0T.
If I manually fix it to 16G (for example), it gets converted back to 0T as soon as I run pct rescan again.
Also, when trying to migrate the same CT to NFS-based storage, I get the same error as on RBD:
for whatever reason Proxmox thinks the size of the disk is 0 and does not allocate enough space.

Code:
Formatting '/mnt/pve/nfstest/images/107/vm-107-disk-0.raw', fmt=raw size=1
mke2fs 1.44.5 (15-Dec-2018)
mkfs.ext4: Device size reported to be zero.  Invalid partition specified, or
    partition table wasn't reread after running fdisk, due to
    a modified partition being busy and in use.  You may need to reboot
    to re-read your partition table.

TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /mnt/pve/nfstest/images/107/vm-107-disk-0.raw' failed: exit code 1

Why?
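For what it's worth, the bogus size can be reproduced and seen directly in the CT config (CT 107 from the logs above; the storage name in the example line is just what my setup uses):
Code:
# rescan just this CT and let Proxmox update the recorded disk sizes
pct rescan --vmid 107
# show what size Proxmox now records for the rootfs
pct config 107 | grep rootfs
# in my case this ends up as something like:
#   rootfs: local-zfs:subvol-107-disk-0,size=0T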
 
Update: I can confirm that subvol-based CTs, whether on native ZFS storage or on directory-based local storage (backed by ZFS in my case), cannot have their rootfs moved to either Ceph RBD or NFS-backed storage.
The issue seems to be that the size of the disk is detected as 0: the resulting image on RBD/NFS is created with size 0, and hence the subsequent format fails.

These CTs were created somewhere around Proxmox 4.4 as subvols on ZFS-based storage (native ZFS, one subvol each).

Newer CTs created as raw images work properly, even when residing on ZFS-based storage.

I have not dug further, but this certainly looks like a Proxmox bug; perhaps an "unsupported" configuration that was once OK and no longer is?

The backup/restore workaround works, but it requires significant downtime for bigger CTs (twice as long as a normal move). An incremental rsync-based solution onto a manually created raw device also works, but it's a pain and prone to errors.
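For completeness, the backup/restore workaround boils down to something like this (the storage names and the exact dump file name are placeholders, not literal values):
Code:
# stop-mode backup of the container to a dump storage
vzdump 107 --mode stop --storage local
# restore it onto the Ceph RBD storage, overwriting the existing CT
pct restore 107 /var/lib/vz/dump/vzdump-lxc-107-<timestamp>.tar --storage <rbd-storage> --force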

Please confirm this is a bug and can be fixed. Thank you.
 
After some debugging, I found that the "move" routines in Proxmox use the refquota property of the ZFS subvol and not the size of the disk in the CT conf file.

If the refquota is "none", as in unlimited space, this is interpreted as a quota of 0; the RBD device is created with size 0 and the rest follows.
For some reason, sending/receiving the subvol from one node to another loses the refquota setting (perhaps an issue in my installs).
Setting the refquota by hand fixes the issue and allows migrating disks to the RBD/NFS (and I guess iSCSI, etc.) based storages.
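Concretely, the manual fix is along these lines (dataset name and size match the examples earlier in the thread; adjust for your own pool and container):
Code:
# check the current quota; "none" is what ends up being treated as size 0
zfs get refquota rpool/vm/subvol-109-disk-0
# set the quota to the intended rootfs size
zfs set refquota=16G rpool/vm/subvol-109-disk-0
# let Proxmox pick up the corrected size
pct rescan --vmid 109
# after this, moving the rootfs to RBD/NFS storage goes through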

Thanks for reading this far.
 
Proxmox VE always sets the refquota and it relies upon knowing the size of the images. As you have seen. :)
 
