I created a ZFS RAIDZ storage. What must I do to make this storage support qcow2?

First, why do you want this? Having a CoW on a CoW is bad practice.

My reason for CoW is to be able to take snapshots. This is a lab environment and I will be testing things out. If there is a more efficient way, I would consider it.
If you still want it, I would create a new dataset for your files and configure it as directory storage in PVE. Have you created the zpool by hand, or was it the installer?
If it's the latter, just create a dataset with zfs create rpool/vms and add this to your PVE storage at Datacenter -> Storage -> Add -> Directory.
[screenshot attached]
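For reference, a rough CLI equivalent of those steps is sketched below; the dataset name rpool/vms and the storage ID vm-qcow2 are only examples, so adjust them to your own pool.

# create a dataset on the installer-created pool (name is an example)
zfs create rpool/vms
# register it as directory storage so qcow2 images can be stored on it
pvesm add dir vm-qcow2 --path /rpool/vms --content images,rootdir
# the resulting entry in /etc/pve/storage.cfg looks roughly like:
#   dir: vm-qcow2
#           path /rpool/vms
#           content images,rootdir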
ZFS does already support snapshots, but only linear ones, not tree-like. You can only switch to a grandparent snapshot if you remove the parent. With QCOW2, you can switch between any snapshots without destroying anything.
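To illustrate that difference, here is a small sketch; the dataset rpool/vms and the image vm-100-disk-0.qcow2 are placeholder names.

# ZFS: rolling back past a newer snapshot destroys it (-r)
zfs snapshot rpool/vms@before-test
zfs snapshot rpool/vms@after-test
zfs rollback -r rpool/vms@before-test   # after-test is destroyed
# qcow2: internal snapshots can be listed and applied in any order (VM stopped)
qemu-img snapshot -c before-test vm-100-disk-0.qcow2
qemu-img snapshot -c after-test vm-100-disk-0.qcow2
qemu-img snapshot -l vm-100-disk-0.qcow2
qemu-img snapshot -a before-test vm-100-disk-0.qcow2   # after-test still exists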
Also, why do you say CoW on CoW?

ZFS is CoW and QCOW2 is also CoW.

There is nothing on it yet, and I can start afresh if my approach is not the right one.

I cannot tell if this is the wrong one, I don't know what you want to do.

I will try out the instructions you sent. When I read that QCOW2 is so strongly supported in Proxmox, I would have imagined that this functionality is enabled out of the box.

QCOW2 is the native format of the hypervisor used here, KVM/QEMU (QCOW2 = QEMU Copy-On-Write), yet it is not the best ... as with everything ... it depends.
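If it helps to see what a qcow2 image actually is, a minimal example (file names and size are arbitrary):

# create a sparse qcow2 image; space is allocated only as the guest writes
qemu-img create -f qcow2 test-disk.qcow2 32G
# inspect format, virtual size and allocated size
qemu-img info test-disk.qcow2
# convert an existing raw image to qcow2 if needed (VM must be stopped)
qemu-img convert -f raw -O qcow2 vm-100-disk-0.raw vm-100-disk-0.qcow2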
Hi, you still have not answered my question about the problems you have with ZFS.
Here is a ZFS-based machine I have:
[screenshot attached]
and it supports snapshots:
[screenshot attached]
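The same check from the shell, assuming a VM with ID 100 whose disks live on ZFS (the ID is only an example):

# take, list and roll back a snapshot through the PVE tooling
qm snapshot 100 before-upgrade
qm listsnapshot 100
qm rollback 100 before-upgrade
# the underlying ZFS snapshots are visible with
zfs list -t snapshot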
When I researched the topic I figured out that snapshots on RAW are not possible and that I had to go to QCOW2 (there could have been other formats).

Good that ZFS is not raw, it's ZFS, so snapshots are allowed. Always have been.

On the partition that was created automatically during the Proxmox install, when I moved the partition to a directory, the QCOW2 conversion became enabled, and once the move was completed I could then perform a snapshot.

I have no idea what you did and what you did not, so please share the output of the commands lsblk and cat /etc/pve/storage.cfg in code tags.

My question today was how to enable the DIR functionality on a new ZFS pool I had just created. The created cluster did not have DIR functionality at inception.

Cluster?

The YouTube video I shared in my post before this one demonstrated how to do it. Have a look at the video I took and shared in the post. I annotated it.

I did, and I still have questions ... it does not show the relevant parts.
Here is the output of lsblk:
root@pve1:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 3.6T 0 disk
├─sda1 8:1 0 3.6T 0 part
└─sda9 8:9 0 8M 0 part
sdb 8:16 0 3.6T 0 disk
├─sdb1 8:17 0 3.6T 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 3.6T 0 disk
├─sdc1 8:33 0 3.6T 0 part
└─sdc9 8:41 0 8M 0 part
zd0 230:0 0 4M 0 disk
nvme1n1 259:0 0 1.8T 0 disk
├─nvme1n1p1 259:1 0 1007K 0 part
├─nvme1n1p2 259:2 0 1G 0 part
└─nvme1n1p3 259:3 0 1.8T 0 part
nvme0n1 259:4 0 1.8T 0 disk
├─nvme0n1p1 259:5 0 1007K 0 part
├─nvme0n1p2 259:6 0 1G 0 part
└─nvme0n1p3 259:7 0 1.8T 0 part
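The ZFS side of this layout is not shown by lsblk; the commands below would complete the picture (their output was not part of the post, so nothing is assumed about it):

# pool layout and health (raidz members)
zpool status
# datasets and mountpoints
zfs list -o name,used,avail,mountpoint
# storages PVE currently knows about
pvesm status
cat /etc/pve/storage.cfg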