Using ZFS with qcow2

thiagotgc

Active Member
Dec 17, 2019
I have an environment with PVE 7.3 where the disks are on ZFS.
I mounted the pool in PVE as directory storage, since I currently use qcow2.

However, I have always used the qcow2 format for the ease of snapshots.

The question is:

Do I lose a lot of performance using qcow2 on zfs storage?

What is the right way to get the best result and practicality?
 
Do I lose a lot of performance using qcow2 on zfs storage?
Yup. And SSD wear, in case you use SSDs.
What is the right way to get the best result and practicality?
If you care about performance or SSD life expectancy, then converting the qcow2 files to raw-format zvols would be the way to go. But then you have to deal with the limitation of ZFS snapshots: a rollback will wipe all data newer than the snapshot you roll back to.
So you will have to decide whether those qcow2 snapshots are worth the additional performance and financial costs.
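Roughly, the conversion looks like this (pool, path and VM names are just examples, adapt them to your setup):

Code:
# create a zvol at least as large as the virtual disk
zfs create -V 32G rpool/data/vm-100-disk-0
# copy the qcow2 contents onto the zvol as raw data
qemu-img convert -p -f qcow2 -O raw /mnt/dir-storage/images/100/vm-100-disk-0.qcow2 /dev/zvol/rpool/data/vm-100-disk-0

After that you would point the VM at the new disk on the ZFS (zfspool) storage and remove the old qcow2 file.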
 
However, I have always used the qcow2 format for the ease of snapshots.
You realize ZFS has the same functionality? Also, since it's intrinsic to the design and toolset of ZFS, its snapshot facility is more robust and capable.
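For illustration, the basic workflow on a zvol (dataset name is just an example):

Code:
zfs snapshot rpool/data/vm-100-disk-0@before-update
zfs list -t snapshot -r rpool/data/vm-100-disk-0
# rolls back to the latest snapshot; going further back needs -r and destroys the newer snapshots
zfs rollback rpool/data/vm-100-disk-0@before-update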

Do I lose a lot of performance using qcow2 on zfs storage?
Consider what you're proposing: a CoW filesystem on TOP of a CoW filesystem. If it's not clear, write amplification would DECIMATE any performance (and, as @Dunuin said, the life of your storage device).
 
The more you nest virtualization layers, filesystems, storages and so on, the more overhead you get. This increases the write amplification, so more is written to the SSDs and because of that they wear faster. With qcow2 you add another filesystem layer that could be avoided by using zvols.
CoW has terrible overhead, and because overhead multiplies rather than adds up, it can get really bad when running a CoW filesystem on top of another CoW filesystem (illustratively, 3x amplification from one CoW layer on top of 3x from another ends up around 9x, not 6x).
 
Specifically, this environment has two clustered servers with only SATA HDD disks... powered by a Xeon 1270 v6 @ 3.8 GHz, 64 GB RAM and 2x 8 TB disks.

In this environment I get about 4 GB/s in a simple dd test...

Since wear and tear is not an issue here, I want the best performance and a trustworthy environment with ZFS.
 
In this environment I get about 4 GB/s in a simple dd test...
And probably it drops from GB/s to KB/s when hitting it with random 4K sync writes using fio. Really depends on your workload.

But at least you don't have to care about SSD wear then.
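To see the difference, compare a sequential dd with a random 4K sync-write fio run (paths and sizes are just placeholders; note that writing zeroes to a compressed dataset also inflates the dd number):

Code:
# sequential writes, roughly what a "simple dd test" measures
dd if=/dev/zero of=/tank/testfile bs=1M count=10240 conv=fdatasync
# random 4K sync writes, closer to what databases and many small clients do
fio --name=rand4k-sync --filename=/tank/fio.test --size=4G --bs=4k --rw=randwrite --sync=1 --ioengine=psync --runtime=60 --time_based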
 
>With qcow2 you add another filesystem layer that could be avoided by using zvols.
>CoW has terrible overhead, and because overhead multiplies rather than adds up, it can get really bad when running a CoW filesystem on top of another CoW filesystem.

Could you describe what terrible overhead you see, @Dunuin?

I don't see any significant difference in the amount of IO when writing to qcow2. OK, the sync overhead may not be negligible, but for me the advantages of using qcow2 on ZFS outweigh it...

Have a look at https://www.usenix.org/system/files/conference/fast16/fast16-papers-chen-qingshu.pdf

Furthermore, zvols have their issues too: https://github.com/openzfs/zfs/issues/8472
 
And probably it drops from GB/s to KB/s when hitting it with random 4K sync writes using fio. Really depends on your workload.

But at least you don't have to care about SSD wear then.
I use Nextcloud/ownCloud for online storage.
It involves a lot of file synchronization: online clients, web access and so on...
Performance is extremely important...
 
I don't see any significant difference in the amount of IO when writing to qcow2.
Is that right? Perhaps you'd care to post some benchmarks?

OK, the sync overhead may not be negligible, but for me the advantages of using qcow2 on ZFS outweigh it...
What advantages are you referring to?
Have you actually READ those links?

The paper you refer to describes how they mitigate the obvious problems of a filesystem-based CoW blob, and why the "q" (as in quick) cow format mitigates some of them. At no point do they describe using a CoW filesystem UNDERNEATH:
We conducted the experiments on a machine with a 4-core Intel Xeon E3-1230 V2 CPU (3.3GHz) and 8GB memory. We use 1 TB WDC HDD and 120G Samsung 850 EVO SSD as the underlying storage devices. The host OS is Ubuntu 14.04; the guest OS is Ubuntu 12.04. Both guest and host use the ext4 file system. We use KVM [10] and configure each VM with 1 VCPU, 2GB memory, and 10GB disk. The cache mode of each VM is writeback, which is the default setting. It has good performance while being safe as long as the guest VM correctly flushes necessary disk caches [20]
Never mind that this article is describing technology from 8 YEARS AGO.

As for the zvol issue you link, it was handled in 2019.

Look, no one has a vested interest in robbing you of perceived features and performance. You want to deploy qcow2 on ZFS? Go nuts.
 
Is that right? Perhaps you'd care to post some benchmarks?

https://jrs-s.net/2018/03/13/zvol-vs-qcow2-with-kvm/
https://forum.level1techs.com/t/zvo...mance-difference-on-nvme-based-zpool/182074/7

>What advantages are you referring to?

- being able to easily shuffle around a VM including all its snapshots (as it's not tied/linked to the host's filesystem)
- handling files instead of block devices is much more straightforward/easy
- using ZFS replication tools like syncoid simply works (whereas it doesn't work with zvols); see the sketch below
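For what it's worth, replicating a whole dataset of qcow2 images is a one-liner (hostnames and dataset names are placeholders):

Code:
syncoid --recursive rpool/data/vm-images root@backuphost:tank/backup/vm-images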

>As for the zvol issue you link, it was handled in 2019.

Somebody in this thread even calls them "pathologically broken by design":

https://github.com/openzfs/zfs/issues/11407

(whereas in my own experience the stalls/lockups I saw did get solved, and they were NOT related to zvols but to qemu locking)
 
This. is. OLD. Perhaps you can offer your own... Also, these are not about qcow ON ZFS. If you were arguing AGAINST ZFS, this would have merit.

This isn't even about what we're talking about. ext4 is faster than ZFS, but that's because ext isn't CoW; apples and oranges.

Why are you just doing Google searches looking for whatever backs your preconceived notions? I don't care whether you're right or not; I thought you were asking for advice.

- being able to easily shuffle around a VM including all its snapshots (as it's not tied/linked to the host's filesystem)
Easily done with zvols. Snapshots aren't linked to a host filesystem in any case; zvols don't belong to the host, they belong to the filesystem.
- handling files instead of block devices is much more straightforward/easy
Handling zvols is just as easy. It's also safer than handling files.
- using zfs replication tools like syncoid simply works (whereas it doesn't work with zvols)
zfs send/receive is more robust, faster, and more capable than syncoid. If you must use a shell on top of it, sanoid is a thing (the project syncoid is part of).
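For reference, sending a zvol and its snapshots is just as much of a one-liner (names are placeholders):

Code:
zfs snapshot rpool/data/vm-100-disk-0@repl1
zfs send rpool/data/vm-100-disk-0@repl1 | ssh backuphost zfs receive tank/backup/vm-100-disk-0
# later, send only the changes since the previous snapshot
zfs snapshot rpool/data/vm-100-disk-0@repl2
zfs send -i @repl1 rpool/data/vm-100-disk-0@repl2 | ssh backuphost zfs receive tank/backup/vm-100-disk-0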
somebody in this thread even calls them "pathologically broken by design"
Opinions are like assholes... well, you know how that goes.

ZFS is superior BY DESIGN to a separate LVM/filesystem stack. qcow2 exists as a stopgap for non-LVM-aware filesystems, and should not be used when you have alternatives. This is not to say that ZFS is perfect or that it's ideal for all use cases (search the forum, there are a bunch of threads on the subject), BUT putting a copy-on-write filesystem ON TOP OF IT is not wise by any metric.
 
