No QCOW2 on ZFS?

If I store a VM image on ZFS storage, I only get raw as an option for the image format.

I assume it's because ZFS supports thin provisioning.
 
No,
with the ZFS pool plugin you use zvols, which are ZFS's block device emulation.
So you have a raw block device in QEMU.
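
For illustration, a zfspool storage entry in /etc/pve/storage.cfg and the zvols it creates look roughly like this (pool, storage ID and VM number are made-up examples):

# /etc/pve/storage.cfg - zfspool plugin, example entry
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

# each VM disk gets its own zvol, handed to QEMU as a raw block device
zfs list -t volume
# NAME                       USED  AVAIL  REFER  MOUNTPOINT
# rpool/data/vm-100-disk-0   1.2G   200G   1.2G  -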
 
Some people might use qcow2 on ZFS for their own reasons, e.g. temporarily or when migrating between different storage formats.
 
Some people might use qcow2 on ZFS for their own reasons, e.g. temporarily or when migrating between different storage formats.

As long as you use ZFS as a directory storage, you can store any file there, including qcow2 files. But this is NOT recommended, see above.
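
For anyone wanting to try it anyway, a minimal sketch of that setup (dataset name and storage ID are placeholders): a ZFS dataset is mounted and added as a plain directory storage, and PVE will then offer qcow2 on it.

# hypothetical names - ZFS dataset used as a directory storage
zfs create -o mountpoint=/tank/images tank/images
pvesm add dir zfs-dir --path /tank/images --content images
# disks created on "zfs-dir" can be qcow2 files instead of raw zvols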
 
bring no benefits or features.

Oh yes, it does have benefits! Yes, plural!
* you can jump between snapshots without cloning
* you can create snapshot trees
Both are unavailable in ZFS, so using qcow2 on a ZFS-backed directory (if you only have ZFS available) is the only way to achieve this.

I use it quite often and have never experienced any bottlenecks. I know that CoW-on-CoW has its drawbacks, but the benefits outweigh them in my case.
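
For reference, the internal qcow2 snapshots described above can also be handled directly with qemu-img (the file name is just an example); in the PVE GUI the same snapshots show up as a tree you can jump between:

# create / list / revert to / delete an internal qcow2 snapshot (VM powered off)
qemu-img snapshot -c before-upgrade vm-100-disk-0.qcow2
qemu-img snapshot -l vm-100-disk-0.qcow2
qemu-img snapshot -a before-upgrade vm-100-disk-0.qcow2
qemu-img snapshot -d before-upgrade vm-100-disk-0.qcow2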
 
I never thought about the drawbacks of CoW on CoW, to be honest. Thanks for the clarification!

Since I only use Proxmox in my homelab (at work we use VMware), I might switch to ext4 for my Proxmox drives.
 
might switch to ext4 for my Proxmox drives.
Yes, do that :) The choice of storage depends on the features you need. For migrations, copying and testing, qcow2 on ext4 is the easiest way.

But for everything else in production environments without HW RAID, ZFS or Ceph really are the best :)
With HW RAID, use LVM-Thin.
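
For such migrations, qemu-img converts in either direction (paths are examples):

# raw -> qcow2, e.g. when moving onto a directory/ext4 storage
qemu-img convert -p -f raw -O qcow2 vm-100-disk-0.raw vm-100-disk-0.qcow2
# qcow2 -> raw, e.g. when moving back onto ZFS or LVM-Thin
qemu-img convert -p -f qcow2 -O raw vm-100-disk-0.qcow2 vm-100-disk-0.raw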
 
Never do this, because qcow2 is CoW and ZFS is too.
A CoW filesystem on a CoW filesystem will kill performance and bring no benefits or features.
What if I use a directory storage and the raw format? Are there any disadvantages to this?

Sometimes I need to copy the disk to another machine.
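
A rough sketch of that copy, assuming a zvol as the source on one side and a raw file on the other (host, pool and paths are made up):

# stream a zvol to a raw file on another machine
dd if=/dev/zvol/rpool/data/vm-100-disk-0 bs=1M status=progress | ssh otherhost 'cat > /var/lib/vz/images/100/vm-100-disk-0.raw'
# with the directory + raw setup asked about above, a plain file copy is enough
scp /var/lib/vz/images/100/vm-100-disk-0.raw otherhost:/var/lib/vz/images/100/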
 
Since the post isn't marked as solved, I thought I'd jump in and ask a related question.
I need to transfer all the VMs from one node to another (they are not part of a cluster). That transfer may happen again in the near future, so I am trying to find out all the conversions that happen (from a storage perspective) during that process.

What I've done so far:
Both nodes have the same remote storage (marked in Proxmox's GUI to handle only VM backups) from a TrueNAS box, which shares that storage (a dataset created on a pool; was there a better way?) via NFS (I didn't use iSCSI because this storage might be used by both nodes at the same time). The connection between each node and the TrueNAS is 10 GbE with DAC cables.

Now, since the VMs were initially created on the Proxmox side, the storage dedicated to VMs is thin provisioned with an 8k block size, and a separate zvol has been created for each VM. If I check from the GUI (pve-node -> VMstorage -> VM Disks), all disks are in raw format, so everything is good, because I think I need everything in raw format.
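
To double-check that from the CLI instead of the GUI, something like this shows the per-VM zvols and their properties (pool and dataset names are examples):

zfs list -t volume -o name,volsize,used
zfs get volblocksize,refreservation rpool/data/vm-100-disk-0
# volblocksize = 8K and refreservation = none would match "8k block size, thin provisioned"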

The backups of those VMs are going to be placed on the remote storage from TrueNAS (the dataset shared via NFS to the specific node),
and here lies what I think is the relevant question. When adding a storage (both local and remote) from Datacenter, you are asked what kind of content this storage is going to hold. If you choose Backups, does this mean that a directory-type storage is created on top of ZFS automatically by Proxmox in order to keep those backups? So do we end up with qcow2 instead of raw?


Another reason I am asking is that now (with the shared storage) I have the option to go to the VM itself, navigate to its hardware options, select the disk, and via Disk Action move it to the remote storage and from there to the new node, since that node has the storage added as well. I am also trying to understand why I would want to convert the VM's disk from the raw format it has now to a different file format like qcow2. Thoughts?

All I am trying to achieve is to make this migration as close to 1:1 as possible, and by 1:1 I mean moving only raw files, not qcow2 files that have been converted to raw and vice versa. Maybe I have the whole process wrong in my mind, which is why I am asking in the first place.
 
Both nodes have the same remote storage (marked in Proxmox's GUI to handle only VM backups) from a TrueNAS box, which shares that storage (a dataset created on a pool; was there a better way?) via NFS (I didn't use iSCSI because this storage might be used by both nodes at the same time). The connection between each node and the TrueNAS is 10 GbE with DAC cables.
The best way for remote ZFS is to use ZFS over iSCSI, but you need support for that on the storage side. I don't know if that's the case for TrueNAS. I'm running this setup on Debian and it works fine: no complicated setup or anything like that, because the storage is already ZFS-backed (one zvol per VM), with snapshot capability and configuration-free online migration.
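
For reference, a ZFS-over-iSCSI entry in /etc/pve/storage.cfg looks roughly like this (portal, target and pool are placeholders; the iscsiprovider has to match the target software running on the storage box):

# example only - all values are placeholders
zfs: remote-zfs
        iscsiprovider LIO
        portal 192.0.2.10
        target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.example
        pool tank
        content images
        sparse 1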

The backups of those VMs are going to be placed on the remote storage from TrueNAS (the dataset shared via NFS to the specific node),
and here lies what I think is the relevant question. When adding a storage (both local and remote) from Datacenter, you are asked what kind of content this storage is going to hold. If you choose Backups, does this mean that a directory-type storage is created on top of ZFS automatically by Proxmox in order to keep those backups? So do we end up with qcow2 instead of raw?
PVE does not see the ZFS, it sees the NFS. You will not have any virtualization-related features like snapshots for this kind of storage.
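
In storage.cfg terms, choosing "Backups" as the content for an NFS share just creates an nfs-type storage entry, roughly like this (server and export are placeholders); no directory storage is layered on top by PVE, and the backups stored there are vzdump archives, not qcow2 or raw disk images:

# example entry - server and export path are placeholders
nfs: truenas-backup
        server 192.0.2.20
        export /mnt/tank/pve-backup
        path /mnt/pve/truenas-backup
        content backup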

I am also trying to understand why I would want to convert the VM's disk from the raw format it has now to a different file format like qcow2. Thoughts?
Snapshots. RAW cannot be snapshotted.
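
The GUI's Disk Action -> Move Storage corresponds roughly to this CLI call (VM ID, disk and storage ID are examples; older PVE releases spell it qm move_disk):

qm disk move 100 scsi0 some-dir-storage --format qcow2
# qcow2 is only offered on file-based storages (directory, NFS); a zfspool target only takes raw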
 
Thank you for your quick reply.

PVE does not see the ZFS, it sees the NFS. You will not have any virtualization-related features like snapshots for this kind of storage.
OK, but NFS is just the sharing protocol; underneath it is ZFS from TrueNAS (SCALE, by the way, so Debian based).
Snapshots. RAW cannot be snapshotted.
Possibly true, but then again, why do I have the option in the GUI for each VM to back it up to the remote storage (the NFS one) and can still choose all the available backup options, see below? (Are you perhaps talking about another kind of snapshot, in another menu, rather than the backup one?)
 
OK, but NFS is just the sharing protocol; underneath it is ZFS from TrueNAS (SCALE, by the way, so Debian based).
So it may work with the integration. It depends on the chosen iSCSI implementation.

Possibly true, but then again, why do I have the option in the GUI for each VM to back it up to the remote storage (the NFS one) and can still choose all the available backup options, see below? (Are you perhaps talking about another kind of snapshot, in another menu, rather than the backup one?)
Yes, unfortunately it is also called a "snapshot", but it is not the same kind as a storage "snapshot". Snapshot in the backup menu means that QEMU creates an internal, temporary snapshot of your virtual disk, backs it up, and then deletes it to create a "more consistent" backup. A storage snapshot is created on the storage itself, can easily be rolled back if needed, and is independent of the state of the VM.
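
In CLI terms the two things look like this (VM ID and storage IDs are examples):

# backup using the temporary QEMU snapshot ("snapshot" mode) to the NFS storage
vzdump 100 --mode snapshot --storage truenas-backup
# a real, rollback-able storage snapshot - only on snapshot-capable storage like a ZFS pool
qm snapshot 100 before-upgrade
qm rollback 100 before-upgrade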
 
So it may work with the integration. It depends on the chosen iSCSI implementation.
AFAIK no one provides this for Proxmox. It would be possible to write a shim for TrueNAS using their API, but ultimately that would still be an imperfect solution. The only workable solution is LVM on the iSCSI target, unless you're both brave and have unlimited dev time to deploy GFS/OCFS.
 
The only workable solution is LVM on the iSCSI target, unless you're both brave and have unlimited dev time to deploy GFS/OCFS.
I wouldn't say the only one, since yesterday I finished all the tests I wanted. Shared storage (even just for backups and not for the VMs themselves) solves many problems when you don't want a cluster of 3 servers (or 2 + 1 voting node).

I was able to back up VMs and their additionally attached disks to a shared storage pool and restore them to the other node (which has the same shared pool where the backups exist).
Afterwards I could reassign that extra disk to another VM or move it to another pool (useful when working with Active Directory shared files or SQL logs/data/databases).

So it seems to me that it worked. I don't know about the speed I managed to achieve during each of these processes (even with a 10 GbE connection), but that is a problem for another post.
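
For anyone repeating this, the equivalent CLI steps would be roughly the following (VM ID, storage IDs and the archive name are placeholders):

# on the source node: back up to the shared NFS storage
vzdump 100 --storage truenas-backup --mode snapshot
# on the target node: restore from the same shared storage onto its local ZFS pool
qmrestore /mnt/pve/truenas-backup/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-zfs
# the restored disks are recreated in the target storage's native format (raw zvols on a ZFS pool)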
 
I wouldn't say the only one, since yesterday I finished all the tests I wanted. Shared storage (even just for backups and not for the VMs themselves) solves many problems when you don't want a cluster of 3 servers (or 2 + 1 voting node).
Two different applications, although you can get pretty decent speed with qcow2 over NFS (depending on how good your storage subsystem is on the filer). If that works for you, great :)

--edit: I just noticed the thread name. You won't get very good performance writing qcow2 to ZFS-backed storage. If you end up going that route, use HW RAID/LVM for best results.
 