ZFS Single Disk Setup - Best Practice for longevity and efficiency

Please share those results. What file system is used inside the VM? What's the VM config look like? What about the storage?
I have 3 VMs, two of them running Docker environments; this is what happens when I do some work (top: PVE host, bottom: VM guest, id 151):

[screenshot: iotop-c write totals on the PVE host (top) and inside VM 151 (bottom)]


This is just crazy, and I've also seen worse results.
I'm seriously considering a fresh install with ext4... or detaching the passthrough disk from the VM, adding it to PVE, and using it for VM disks?!

Right now my VM has its guest OS on the local PVE ZFS disk and an SSD passthrough for data...
What do you think?
 
Yeah, that does look bad when compared like this, but compression might make iotop-c a bit inaccurate here; I'm not sure.
Then there are also different commit intervals and lots of other things that can complicate this, which is why I didn't go into depth about write amplification.
You can see in my pictures that 15M of writes inside the VM amounted to just 20M on the node. I'd also check with zpool iostat.
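Something like this is enough to watch the pool while repeating the test; a minimal sketch, assuming the pool is called rpool (adjust the name and the interval to your setup):

zpool iostat -v rpool 5      # per-vdev bandwidth and IOPS, refreshed every 5 seconds
zpool iostat rpool 5 12      # or a fixed window: 12 samples at 5-second intervals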
 
Yeah, that does look bad when compared like this, but compression might make iotop-c a bit inaccurate here; I'm not sure.
I'd also compare it with zpool iostat.
What do you think about my workaround?
Reinstall with ext4, or move the VM data to an ext4 NVMe SSD attached to PVE?
I'm trying to find a reason to stay on ZFS on a single disk with the VM data outside of it...

Then there are also different commit intervals and lots of other things that can complicate this, which is why I didn't go into depth about write amplification.

Looking at nvme smart-log /dev/nvme0, the TBW seems to increase by the same amount that iotop-c reports...
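For reference, this is roughly how I'm reading it; a sketch assuming the drive is /dev/nvme0:

nvme smart-log /dev/nvme0 | grep -i written    # shows "Data Units Written" (plus host write commands)
# per the NVMe spec one data unit is 1000 * 512 bytes,
# so bytes written ≈ Data Units Written * 512000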
 
You can just move the virtual disk that the VM uses to an LVM-Thin storage if you have multiple disks and see if that works better. Personally, I just buy used DC drives and never really think about writes that much. I occasionally check, of course, but I'm not really concerned about them dying.
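The move can also be done from the CLI while the VM is running; a rough sketch, where VM 151, the scsi1 disk and the storage name local-lvm are all assumptions you'd adjust to your config:

qm disk move 151 scsi1 local-lvm --delete 1    # move the virtual disk and drop the old copy
# older PVE releases call the same thing "qm move-disk" / "qm move_disk"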
 
You can just move the virtual disk that the VM uses to an LVM-Thin storage if you have multiple disks and see if that works better. Personally, I just buy used DC drives and never really think about writes that much. I occasionally check, of course, but I'm not really concerned about them dying.
After a deeper dive into my setup, I think I will go for this:
format my whole 512GB drive
partition 1: ZFS for PVE - 80GB
partition 2: dedicated to LVM-Thin (non-ZFS) - rest of the disk

I know it's not ideal to not dedicate the whole disk to ZFS, but it's the only way... it's also a test of recovering the PVE installation from external storage.

I don't want to lose the ZFS replication and snapshot features, and I cannot use other disks.
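For the LVM-Thin part, I'm planning something along these lines; a sketch where the partition (nvme0n1p4), the VG/pool names and the size are placeholders:

pvcreate /dev/nvme0n1p4                     # the partition created with fdisk after the install
vgcreate vmdata /dev/nvme0n1p4
lvcreate -L 400G --thinpool data vmdata     # leave headroom for thin-pool metadata
pvesm add lvmthin vmthin --vgname vmdata --thinpool data --content images,rootdir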
 
Wow, the process was easy and smooth, though a bit painful because of some inexperience on my side.

Since I was moving the data storage to the LVM partition anyway, I also tested the ZFS backup (done with replication), and I can confirm that ZFS for root is the right choice: the restore was very easy and 100% accurate! No headache restoring files, folders, packages and so on; the system is exactly the snapshot taken before wiping the disk.
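For context, a replication backup like the one used below can be produced with a snapshot and a full send to the pool on the USB disk; a sketch, where backup_usb is the external pool and the snapshot name matches the restore step further down:

zfs snapshot -r rpool/ROOT/pve-1@snap-07_09_25_02:45
zfs send -R rpool/ROOT/pve-1@snap-07_09_25_02:45 | zfs receive -F backup_usb/ROOT/pve-1   # assumes the backup_usb/ROOT parent dataset already exists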

so basically:

start the PVE installer
configure 75GB of space as ZFS RAID 0 (the remaining space is left unallocated by the installer and formatted later with fdisk for LVM-Thin)
install PVE and reboot

live boot the installer in debug mode and run:

zpool import rpool                         # pools are imported with zpool, not zfs
zpool import backup_usb                    # the backup pool on the USB disk
zfs send -R backup_usb/ROOT/pve-1@snap-07_09_25_02:45 | zfs receive -F rpool/ROOT/pve-1
zpool set bootfs=rpool/ROOT/pve-1 rpool    # bootfs is a pool property, so zpool set
zfs set mountpoint=/ rpool/ROOT/pve-1
zpool export rpool
zpool export backup_usb
reboot
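After the reboot, a few read-only checks can confirm everything points where it should (optional; nothing here changes anything):

zpool get bootfs rpool                 # should report rpool/ROOT/pve-1
zfs get mountpoint rpool/ROOT/pve-1    # should be /
proxmox-boot-tool status               # bootloader entries should still be found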

Now PVE is fresh and exactly the same as before!

I started by restoring the PBS container manually (pct restore 100 /zfs-data-1tb/Backup/proxmox/dump/vzdump-lxc-100-2025_09_01-00_00_00.tar.zst --storage local-zfs --force), and from PBS it was very easy to restore all the other VMs.
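For completeness, restoring a single VM from the PBS storage on the CLI looks roughly like this; the storage name pbs, the VM id and the timestamp are placeholders:

qmrestore pbs:backup/vm/151/2025-09-07T02:45:00Z 151 --storage local-zfs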

Now I'm going to monitor TBW with this setup.

I'm very happy with this DR strategy.
 