Qcow2 vs. Raw speed

micush

Renowned Member
Jul 18, 2015
After recently having some slowness issues with my VMs, I decided to convert all their disk formats from qcow2 to raw. With raw disks I now have no more slowness issues with any of my VMs.

Even though snapshots are not supported for raw disks on a non-cow filesystem, the speed difference I experienced within the VMs was worth changing disk formats and giving up snapshots. I realize individual requirements for different installations may be unique and require some of the flexibility provided by qcow2, but the speed difference for me was quite surprising.

If you have a qcow2 VM that isn't performing as expected, try converting the disk format to raw. It may speed up your VMs like it did mine.
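For anyone wanting to try this, the conversion can be done with `qemu-img convert` while the VM is stopped. The paths and VM ID below are hypothetical examples; adjust them to your storage layout:

```shell
# Stop the VM first so the image is not written to during conversion.
# Paths and VM ID are illustrative examples only.
qemu-img convert -p -f qcow2 -O raw \
    /var/lib/vz/images/100/vm-100-disk-0.qcow2 \
    /var/lib/vz/images/100/vm-100-disk-0.raw

# Sanity-check the converted image before pointing the VM config at it:
qemu-img info /var/lib/vz/images/100/vm-100-disk-0.raw
```

After that you still need to update the VM configuration to reference the raw image and, once you've verified it boots, remove the old qcow2 file.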
 
According to upstream QEMU, the difference is in the 5% range. If you experience a higher difference, please share your benchmarks.
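For comparable numbers, an `fio` run inside the guest against each disk format is a common way to benchmark this. A minimal sketch (the filename and sizes are placeholder values, not from this thread):

```shell
# 70/30 4k random read/write test with direct I/O, run inside the guest.
# --filename and --size are hypothetical; point them at the disk under test.
fio --name=randrw --filename=/root/fio-testfile --size=1G \
    --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio \
    --iodepth=32 --direct=1 --runtime=60 --time_based --group_reporting
```

Running the same job on a qcow2-backed and a raw-backed disk gives IOPS and latency figures that can actually be compared, rather than impressions of "slowness".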
 
Perhaps it is a 5% performance difference at creation time. I have no doubt of this.

However, at least for me, over time with daily snapshotting and heavy use qcow2 slowed down considerably.

It used to take me more than a day to back up a 5TB VM on qcow2. Now it takes just a few hours. Nothing changed but the disk format.

I used to get unexplained gaps in monitoring graphs, and streaming video used to stall during playback from my video server. After converting to raw, neither occurs any more.

I've now converted more than 30 VMs over to raw from qcow2, and there is most definitely a difference in VM response time.
 
What's the underlying storage for the VM disks? Have you tried ZFS? I ask since ZFS won't work with qcow2, only raw, but snapshots etc. are handled by ZFS itself (not a bad idea after all), so this may work well. ZFS also offers thin provisioning. At your disk sizes (I wonder if those 5TB are on a single disk, btw?) ZFS may speed things up or slow them down considerably, and you can also add ZFS compression, which may be good too.
I'm looking into the same topic, so I'd really appreciate any words you can share on anything ZFS-related.
 
The underlying storage is hardware RAID6 with BBU and an EXT4 filesystem, served over NFS on 40Gb InfiniBand. The largest VM has a 4TB and a 2TB virtio disk, and the other VMs of various disk sizes are all configured with single virtio disks.
 
Looks like it should be pretty fast, aside from NFS, which may be pretty slow if not tuned well (hopefully not in your case).

I'd love to know how the same hardware and VM setup would perform with ZFS underneath, but ZFS would more likely sit on locally attached disks, not over NFS (or over the network with iSCSI rather than NFS).
 
