Size of disk image

FanHi

New Member
Jun 19, 2020
Hi,

I'm new to this forum and generally a bit new to Proxmox.
I like it very much so far, but one thing is just beyond my understanding.

I have a disk image whose size behaves oddly, and I can't figure out why.

The qcow2 disk image is stored on a local hard drive and is attached to the VM via a VirtIO SCSI controller.
It was created with a 2TB size and the discard option is enabled.
Inside the guest I formatted the whole 2TB with an ext4 filesystem and the current usage is about 307GB.
The actual size of the disk image file as viewed from the host is also about 307GB and the output of qemu-img info reads:

file format: qcow2
virtual size: 0.977 TiB (1073741824000 bytes)
disk size: 307 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

What I don't understand from this is the virtual size of 1TB.
The output of qm config is:

scsi1: local10TB:111/vm-111-disk-0.qcow2,discard=on,size=2000G
scsihw: virtio-scsi-pci

What's really interesting for me is when I look in the GUI - Storage - Content, the disk image size is 2TB.
Also, if I do a backup of the VM, it processes the whole 2TB. The backup itself only contains about 300GB, since it recognizes that the other 1.7TB is just sparse, but the backup still takes more than an hour because it has to read the full 2TB.
I already manually ran fstrim from within the guest (Ubuntu 20.04) twice, with a guest reboot in between, and it did trim 1.7TB (why though?), but the size of the disk image as viewed from the host is still the same.

Please point me to my misconception.

Thank you all very much in advance

Best regards

FanHi
 
What I don't understand from this is the virtual size of 1TB.
Hm, could be a bug in QEMU, not sure. I'll look into it.

Also if I do a backup of the vm, it processes the whole 2TB. The actually backed up data is only 300GB since it recognizes the 1.7TB are just sparse, but the backup still takes more than an hour as it has to read the full 2TB.
Well, yes, because it actually does have to read all 2TB. The backup process can't know which parts are sparse and which aren't, so it has to read the entire thing. Once it detects a sector as being all zero, it will fast-path the write, but it still has to do the read to know.
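To make the "read everything, fast-path the zeros" idea concrete, here is a toy sketch (not Proxmox's actual backup code, and the filename `block.bin` is made up): the block must be read either way, but an all-zero block can be recorded as a sparse extent instead of being copied.

```shell
# Produce one 64 KiB all-zero block, standing in for a block read
# from the guest disk during backup.
dd if=/dev/zero of=block.bin bs=64K count=1 status=none

# cmp -n limits the comparison to the first 65536 bytes of /dev/zero,
# so this checks whether the block we just read is entirely zero.
if cmp -s -n 65536 block.bin /dev/zero; then
    echo "all zero: skip the write, record a sparse extent"
else
    echo "data present: copy the block into the backup"
fi

rm block.bin
```

The point of the sketch is that the `dd` (the read) happens unconditionally; only the write side gets the shortcut.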

I already manually ran fstrim from within the guest (Ubuntu 20.04) twice with a guest reboot in between and it did trim 1.7TB (why though?)
Why not? TRIM is a very fast operation in general, so it makes sense to just TRIM everything not used. SSD controllers (or QEMU in this case) are smart enough to handle it.
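For a file-backed image, a guest TRIM that QEMU forwards is effectively a hole punch: the allocated blocks drop, while the apparent (virtual) size stays the same. A small demo of that effect on a plain raw file (the filename `demo.img` is made up; this assumes a filesystem that supports hole punching, such as ext4 or xfs):

```shell
# Write 32 MiB of random data so the blocks are really allocated.
dd if=/dev/urandom of=demo.img bs=1M count=32 status=none
du --block-size=1M demo.img                    # ~32 MiB allocated

# Punch a hole over the first 16 MiB, as a guest TRIM of that
# range would do to the backing file.
fallocate --punch-hole --offset 0 --length 16M demo.img

du --block-size=1M demo.img                    # ~16 MiB allocated
du --block-size=1M --apparent-size demo.img    # still 32 MiB apparent
rm demo.img
```

This mirrors what you see with the qcow2 image: fstrim shrinks the on-disk usage, but the virtual size the backup has to walk does not change.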
 
Well, yes, because it actually does have to read all 2TB. The backup process can't know which parts are sparse and which aren't, so it has to read the entire thing. Once it detects a sector as being all zero, it will fast-path the write, but it still has to do the read to know.

I see... I was under the impression that the data would be stored to the physical disk somewhat "sequentially" and that the image file would then grow as more physical storage space is used. After all, du -hs ./img.qcow2 seems to recognize that there is some end to the image file, so I assumed the backup job would also just process the actual "physical" image file.
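The distinction behind this can be shown with a plain sparse file (the filename `sparse.img` is made up, and this assumes a filesystem with sparse-file support): du reports the allocated size, while the apparent size — what a full sequential read, like a backup, has to walk — is the full extent.

```shell
# Create a 1 GiB sparse file: apparent size 1 GiB, nothing allocated yet.
truncate --size 1G sparse.img

# Write 10 MiB of real data at the start without truncating the file.
dd if=/dev/urandom of=sparse.img bs=1M count=10 conv=notrunc status=none

du -h sparse.img                  # allocated: about 10M (what du "sees")
du -h --apparent-size sparse.img  # apparent: 1.0G (what a full read walks)
rm sparse.img
```

A qcow2 image behaves similarly in miniature: the file on the host only grows as clusters are written, but the virtual size stays the full 2TB, and that is the extent the backup job reads through.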
 
