Incoherent disk size on NAS and in Proxmox

Hi.
I have a backup VM (Veeam) with a variety of disks, one of which is located on an NFS share.
That disk was about 20 TB, residing on a NAS. I decided to expand the pool on the NAS so that I could expand the disk, and grew it from about 20 TB to 41 TB.
When I then went to expand the qcow2 disk in PVE, the operation failed with a timeout, but the disk does show the new size when I look at it in the VM config in the GUI.
When I look in the folder on the NAS (and in the /mnt/pve folder), however, the qcow2 file is still only the original size.

So: the GUI shows a larger size than ls -lah does in the /mnt folder, and the VM is happy to extend the partition on this qcow2 file to way beyond 20 TB.

I have a feeling this might become a problem once the VM actually fills past 20 TB - is that correct?

Is there any way I can rectify it, or should I not worry?

I've restarted the VM and detached and reattached the disk. PVE still reads the larger size, and the qcow2 file on the NAS still shows the smaller one.

VM definition (the trouble disk is... Trouble_disk :) ):

agent: 1
balloon: 0
boot: order=virtio0;ide0
cores: 12
cpu: host
hotplug: disk,network,usb
ide0: none,media=cdrom
machine: pc-i440fx-9.0
memory: 81920
name: VEEAM01
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=******************************
sockets: 1
startup: order=1
tags: internal
virtio0: *************_LUN01:vm-100-disk-1,size=170G
virtio1: ***********:vm-100-disk-0,backup=0,size=14T
virtio2: Trouble_disk:100/vm-100-disk-0.qcow2,backup=0,size=34000G
virtio3: ***************_LUN01:vm-100-disk-0,backup=0,size=40000G
vmgenid: ***************************
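
(Side note: the size the GUI reports is the size= value recorded in this config, not anything read from the file itself. If the config and the real volume have drifted apart, qm rescan re-reads the actual volume sizes from storage and rewrites those entries - a sketch, with VMID 100 taken from the config above:

# re-read real volume sizes from storage and update the size= entries
qm rescan --vmid 100
# check what the config records for the trouble disk afterwards
qm config 100 | grep virtio2
)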

 
Have you checked the QCOW metadata directly, e.g. with qemu-img info [file]?

It is possible that the storage timeout happened at the most inopportune moment and was not caught by PVE's error handling. Depending on what the actual QCOW thinks of itself, you may need to correct the config file and retry.
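
A possible sequence (paths are assumed from the config above; the NFS storage Trouble_disk would normally be mounted under /mnt/pve/Trouble_disk):

# what does the image itself claim as its virtual size?
qemu-img info /mnt/pve/Trouble_disk/images/100/vm-100-disk-0.qcow2
# if it still reports the old ~20T, fix size= in the config back to the
# real value and retry the grow, either through PVE...
qm resize 100 virtio2 41T
# ...or directly on the file:
qemu-img resize /mnt/pve/Trouble_disk/images/100/vm-100-disk-0.qcow2 41T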

Additionally, what does the VM OS show for the raw disk structure?
If it is Linux:
lsscsi -ss
lsblk
fdisk -l
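
Since the config above shows agent: 1, the guest agent can also report the guest's view regardless of OS (assuming the agent is actually installed and running inside the VM):

# filesystem sizes as seen from inside the guest, via the QEMU guest agent
qm guest cmd 100 get-fsinfo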


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Here is the qcow metadata:

image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 33.3 TiB (36614596198400 bytes)
disk size: 18.7 TiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: vm-100-disk-0.qcow2
    protocol type: file
    file length: 19.2 TiB (21079536915456 bytes)
    disk size: 18.7 TiB
It's a Windows box, and it happily expanded the volume and partition to around 31 TB.
Am I in trouble here? :)
 
To be frank, I don't know what state you are in; based on the data you supplied, it is definitely inconsistent.
It is possible for a QCOW to be thinly allocated and to grow as needed, i.e. to only be as large as the actual data in it. Additionally, the data is zlib compressed.
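
For example, converting the byte counts from the qemu-img output above (bash arithmetic; both numbers are from your paste):

echo $((36614596198400 / 1024**3))  # virtual size = 34100 GiB, yet the config records size=34000G
echo $((21079536915456 / 1024**3))  # file length = 19631 GiB, so the image is thinly allocated

So on top of the file being sparse, the image and the config disagree with each other by 100 GiB.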

Whether you want to move your data to a more consistently presented disk is up to you.
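
If you decide to move it, something like the following copies the volume to another storage and drops the old image after a successful copy (the target name safe_storage is a placeholder; on current releases the command is also spelled qm disk move):

qm move-disk 100 virtio2 safe_storage --delete 1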



 
Thanks for your replies.
I think I got my questions answered by the VM itself: it just stopped responding when I right-clicked the disk, and Disk Management wouldn't open.
Just great on a Friday afternoon :D
 