Thin Provisioning QCOW2 Not Working

deltaend
New Member · Jan 21, 2013
So, I set up some basic clustering with Proxmox 3. Nothing fancy yet, no shared or block storage, but I have the ability to migrate VMs from one server to another. Now, my problem is that when creating a VM disk, the QCOW2 format is no longer thin provisioned. I'm not sure if this is because of Proxmox 3 vs. 2.2 or because I've set up clustering on the server. Either way, it's frustrating, because migrating a 500GB file from one server to another takes way too long. When testing, it seems that the only image format to successfully thin provision is VMDK.

Am I missing something obvious here? Thanks.
 
I hate to do this, but... bump.

Am I seeing normal operation of Proxmox or is there some way to fix this?
 
So... this must be new, since I am running older versions of Proxmox that don't pre-allocate.

According to Proxmox staff... Mr. Dietmar, can thin-provisioned VMDK run as quickly as QCOW2 images?
 
We pre-allocate qcow2 files for performance reasons.

Hi dietmar

But when I create a VM in the PVE GUI with a virtual hard disk in qcow2 format, in theory the qcow2 file should start out as small as possible; in practice its size equals the size chosen at creation time, and the backup obviously takes more time to complete. For this reason I have the habit of creating virtual disks of 1 GB and then resizing them to the size I finally want.

Question:
I think the PVE GUI should be arranged so that when somebody creates a virtual hard disk in qcow2 format, the file starts out with the smallest size it can have, i.e. close to 0 bytes. Don't you think?

In any case, congratulations to the entire PVE development team for making PVE an excellent product, and also for taking the time to dispel our doubts in this forum.

Best regards
Cesar
 
Yes, we changed this some time ago, as the performance is much better.



I have no numbers for VMDK, as we do not test VMDK intensively. But only with qcow2 can you use http://pve.proxmox.com/wiki/Live_Snapshots.
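For reference, live snapshots can also be driven from the command line with `qm`; a rough sketch, assuming a VM with ID 114 exists on the node (the VMID and snapshot name are placeholders):

```shell
qm snapshot 114 before-upgrade      # take a live snapshot of VM 114
qm listsnapshot 114                 # list existing snapshots
qm rollback 114 before-upgrade      # roll back to it later if needed
qm delsnapshot 114 before-upgrade   # remove it when no longer needed
```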

Ok, here is my problem. I know that pre-allocation gives the best performance, but I don't always care about top performance when creating a virtual machine. When density is more important than performance, I would really like to be able to deploy QCOW2 from the GUI without having the data pre-allocated. Of course we still want Live Snapshots with QCOW2, because backups and live migrations take so insanely long when the data is pre-allocated. To fix this, here are my suggestions; they should be easy to implement.

  1. Offer a checkbox in the GUI to pre-allocate space or not (checked by default).
  2. I hear that the QCOW2 format needs some TLC every so often because it grows in size over time. Perhaps a maintenance routine which could be scheduled for all VMs.
  3. A utility to convert thin-provisioned QCOW2 VMs to pre-allocated and back again. Perhaps the maintenance routine could be built into this menu.

What I'm asking for is already built into Hyper-V, VMware, etc...
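For what it's worth, the conversion in suggestion 3 can already be scripted today with qemu-img; a rough sketch, with placeholder filenames (the VM should be stopped while the image is copied):

```shell
# thin out a preallocated qcow2: the copy only stores blocks that hold data
qemu-img convert -f qcow2 -O qcow2 \
    vm-100-disk-1.qcow2 vm-100-disk-1-thin.qcow2

# and the reverse: produce a copy with metadata preallocation again
qemu-img convert -f qcow2 -O qcow2 -o preallocation=metadata \
    vm-100-disk-1-thin.qcow2 vm-100-disk-1-prealloc.qcow2
```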
 
Here is an example of a qcow2 disk, created via the GUI with nothing stored in it.

Code:
root@hp1:/var/lib/vz/images/114# ls -alh vm-114-disk-1.qcow2
-rw-r--r-- 1 root root 33G Jul  9 16:37 vm-114-disk-1.qcow2

Code:
root@hp1:/var/lib/vz/images/114# du -h vm-114-disk-1.qcow2
5.5M    vm-114-disk-1.qcow2

we preallocate metadata, see the log of this creation:
Code:
Formatting '/var/lib/vz/images/114/vm-114-disk-1.qcow2', fmt=qcow2 size=34359738368 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off

Hope this is more clear.
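As a sketch, if you want an image without even the metadata preallocation, you can create it manually with qemu-img on the host (the path and size here are placeholders):

```shell
# create a 32G qcow2 image with no preallocation at all
qemu-img create -f qcow2 -o preallocation=off \
    /var/lib/vz/images/114/vm-114-disk-2.qcow2 32G
```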
 
If you elect for RAW, you won't have the ability to do migration (or snapshots, I think).
I believe the best way to think of it is:
RAW is like direct-to-disk. No features, no management, straight storage.
QCOW2 is low-overhead storage with features such as live migrations, snapshots, tools, and management.

Not sure on all the features, but I have been pulling from different KVM sites on RAW vs. QCOW2.
 
It would be safe to answer this question as:
RAW is the best for performance, however, QCOW2 is probably the safest.
http://pve.proxmox.com/wiki/Installation#Hard_disk

The trade-off is not that much different, according to some benchmarks:
http://www.linux-kvm.org/page/Qcow2

The next issue would be which cache for which circumstance. And if you have a NAS with cache through ZFS NFS, how does QCOW2 cache play into the workflow?
 

Would it be possible to do migrations via rsync with compression on? That would allow a mostly empty 500GB file to be migrated very quickly.

Again, it would be nice to have pre-allocation options in the GUI.
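On the rsync idea: it can indeed skip holes and compress over the wire. A sketch, with the hostname and paths as placeholders (and the VM stopped during the copy):

```shell
# -a preserves permissions/times, -S (--sparse) re-creates holes on the
# receiver, -z compresses the data stream over the wire
rsync -aSz /var/lib/vz/images/114/vm-114-disk-1.qcow2 \
    root@node2:/var/lib/vz/images/114/
```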
 


I must admit to initially misunderstanding this. The qcow2 file is allocated the full amount of space, e.g. 1TB, but it only takes up as much as is actually used? E.g. 7-12GB for a base Windows 7 install. Still sort of like thin provisioning, but with better performance?

I tried allocating two 512GB disks on 128GB of storage and it worked fine; in fact, Proxmox showed usage on the storage as only a few GB, about what I would have expected for thinly provisioned disks. This is much better than what I thought, which was that all the space was used and unavailable.

What happens when actual usage exhausts the storage? Do the VMs get write errors?

How about disk fragmentation in the VMs? Does it stop space from being reclaimed on the storage when files are deleted in the VM?

Thanks - Lindsay

P.S. I'm very impressed with the speed of taking snapshots - *much* faster than XenServer or vSphere.
 
"The qcow2 file is allocated the full amount of space"
Not entirely true. The image has reserved that amount of space from the file system. It only allocates exactly what is needed at any given time. So basically this is like thin provisioning.
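One way to check this for any image is `qemu-img info` (the path here is a placeholder): "virtual size" is what the guest sees, while "disk size" is what is actually allocated on the host file system.

```shell
qemu-img info /var/lib/vz/images/114/vm-114-disk-1.qcow2
# "virtual size" ... the size the guest sees (e.g. 32G)
# "disk size"    ... the space actually allocated on the host (e.g. 5.5M)
```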
 
This concept is called a sparse file on Linux. The specialty is that, as Tom showed, "ls" will show the full/maximum size of it, whereas only "du" will show how much space the file actually occupies.

This means that whatever you use for backups needs to handle sparse files properly. rsync has --sparse for that; other tools may require the use of whichever search engine you distrust the least. In any case, "sparse" is the keyword you're going to want to include in your research.
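A quick way to see sparse-file behaviour for yourself (the filename here is just a throwaway placeholder):

```shell
# create a 1 GiB sparse file: the size is recorded, but no data blocks are written
truncate -s 1G sparse-demo.img

ls -lh sparse-demo.img   # shows the apparent size: 1.0G
du -h  sparse-demo.img   # shows the space actually allocated: (almost) 0

rm sparse-demo.img
```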
 
