Windows and FreeBSD guests: qcow2 vs raw?

alexc

Renowned Member
Apr 13, 2015
Recently my colleagues asked me to set up several Windows Server VMs (they need both 2008 R2 and 2019) and FreeBSD VMs (10 and 12.1) on our Proxmox hosts. The host is a brand-new server, free of anything, so we went with a PVE 6.2 install; for storage it has hardware RAID with LVM and ext4 on top (yes, old-schooler, I know, but this is our old-time approach).

Now when it comes to setting up the VMs I need to choose between qcow2 and raw. I found a somewhat outdated page at https://pve.proxmox.com/wiki/Performance_Tweaks, but several years have passed since it was written, so maybe things have changed?

qcow2 is said to be better for backups and snapshots, and appears to be better optimized for VMs. raw is very close to a physical block device, so non-Linux guests may be happier with it (?). The controller I would like to use is VirtIO SCSI.

Please advise which format to choose for the Windows and FreeBSD VM disks!
 
raw is direct access to a disk device like /dev/sdX, so it will be faster than qcow2.

qcow2 supports snapshots and thin provisioning, which raw does not.

As you went with ext4 on hardware RAID, I would only go with qcow2.
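Thin provisioning just means blocks are allocated only when written. You can see the same effect with an ordinary sparse file on ext4, which is essentially what a freshly created qcow2 image looks like to the filesystem. A quick generic demonstration (not Proxmox-specific):

```python
import os
import tempfile

# Create an empty file and give it a large apparent size without
# writing any data -- a sparse file, much like a fresh qcow2 image.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.truncate(path, 1 << 30)        # apparent size: 1 GiB, nothing written

st = os.stat(path)
apparent = st.st_size             # what `ls -l` reports
allocated = st.st_blocks * 512    # what the file really occupies on disk

print(f"apparent:  {apparent}")
print(f"allocated: {allocated}")
assert allocated < apparent       # thin: almost no blocks allocated yet

os.unlink(path)
```

A raw image with full preallocation is the opposite case: apparent and allocated size match from the start, which is exactly why it cannot be thin-provisioned.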
 
Yes, this is why I used to run qcow2 for Linux VMs :)

That said, I wasn't able to find any numbers on how big the difference is. If it's 1-3%, no problem; if it's 10-30%, the choice is obvious. Maybe you have any measurements on that?

I can only find rumors about it. I have also heard qcow2 is better at guest OS TRIM support (if any); raw, by contrast, shouldn't need TRIM, as it is preallocated and each block is mapped 1:1 to storage.
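For what it's worth, with VirtIO SCSI the in-guest TRIM question mostly comes down to enabling discard on the disk. A hedged sketch of the relevant lines in a VM config file (the VMID 100 and the storage name "local" are made up here):

```
# /etc/pve/qemu-server/100.conf (excerpt; VMID and storage name are examples)
scsihw: virtio-scsi-pci
scsi0: local:100/vm-100-disk-0.qcow2,discard=on
```

With discard=on, TRIM commands issued by the guest can be passed down, so a qcow2 image can shrink again after files are deleted inside the VM.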
 

raw is about 10% faster: https://pve.proxmox.com/wiki/Qemu/KVM_Virtual_Machines

but keep in mind that containers require the snapshot feature to create backups while running
 
Thank you for pointing out that number, really. 10% is a number to consider!

Windows will be a plain Terminal Server host (many users at once; we are testing how our software runs under Terminal Server load), while FreeBSD will run a PostgreSQL database. So raw seems the better choice (no containers there anyway).

But wait: to run a PVE backup in snapshot mode, I'll need qcow2 (raw won't do), right?
 
KVM snapshot-mode backups should work regardless of the disk format.
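For reference, the backup mode is selected via vzdump. QEMU guests use QEMU's own live-backup mechanism rather than filesystem snapshots, which is why it also works with raw disks. A sketch of /etc/vzdump.conf (the storage name "backup-nfs" is an example; zstd compression assumes PVE 6.2 or later):

```
# /etc/vzdump.conf (excerpt; storage name is an example)
mode: snapshot
storage: backup-nfs
compress: zstd
```

The same can be passed on the command line, e.g. `vzdump 100 --mode snapshot`.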
 
I really have to say that I always hate having to work with virtual machines that do not offer snapshots. That is the one good thing about virtualization, the main and only big advantage over bare metal. If you really need to squeeze out the last bit of performance, don't virtualize; run on bare metal, especially with ZFS if you use PostgreSQL. You will gain a lot of performance running ZFS, because it includes features that will boost your PostgreSQL database a lot.
 
I played with ZFS a bit and found you need to be quite proficient with it and to understand what will happen to it over the next several years (say, ZFS never defragments, but it suffers as fragmentation grows). So it's a bit risky, though quite a magic FS, really.
 
Yes, ZFS has a steep learning curve, but if you're lucky, you will never have to learn another filesystem after you've learned ZFS. It's not like all the other filesystems that needed updating after bigger disks came out; ZFS is already 128-bit, so there is currently not enough energy on this planet to drive enough disks to reach its limits.

BTW: ZFS, or more precisely a zvol, combines the advantages of qcow2 and raw, with the drawback of having only a linear snapshot history, whereas qcow2 has a tree-like snapshot history.
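To make the linear-vs-tree distinction concrete, here is a toy model (not real ZFS or qcow2 code, just an illustration): each snapshot records its parent. A tree-like history, as qcow2 allows, means you can branch off any earlier snapshot; a zvol's history is a single chain.

```python
from collections import defaultdict

# Toy snapshot history: each snapshot points to its parent.
# The names and the branch point are made up for illustration.
parents = {
    "base": None,
    "s1": "base",
    "s2": "s1",          # linear so far
    "experiment": "s1",  # qcow2-style branch off s1 -> tree history
}

# Invert parent links to count children per snapshot.
children = defaultdict(list)
for snap, parent in parents.items():
    if parent is not None:
        children[parent].append(snap)

# A history is linear iff no snapshot has more than one child.
is_linear = all(len(c) <= 1 for c in children.values())
print("linear history:", is_linear)  # False: "s1" has two children
```

With a zvol you would have to roll back (destroying s2) before you could take the "experiment" snapshot; with qcow2 both branches can coexist.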