Hello,
I have been running two different setups of ZFS Pools with Proxmox and VMs on them.
- RAID-Z2 (6 disks, ashift=12, volblocksize=64k) <-- virtual disk (reports blocksize as 512 in the VM) <-- NTFS/ext4 (blocksize 4k)
- Mirror (6 disks, ashift=12, volblocksize=8k (default)) <-- virtual disk (reports blocksize as 512 in the VM) <-- NTFS/ext4 (blocksize 4k)
I have already read through these threads and articles (and tried to do the allocation math myself, see the sketch right after this list):
- ZFS size / allocated difference?
- Improve write amplification?
- ashift, volblocksize, clustersize, blocksize
- ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ
- RAID-Z parity cost
- Please help me understand ZFS space usage
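Based on those, here is my rough understanding of how a single zvol block gets allocated on RAID-Z. This is only a back-of-the-envelope sketch following the allocation rules described in the linked posts (data sectors, parity per stripe row, padding to a multiple of nparity+1); it ignores compression, metadata and gang blocks, and I have not verified it against the actual ZFS code:

```
# Rough RAID-Z allocation per zvol block, as I understand it from the
# linked posts: data sectors + parity per stripe row, then padded so the
# total is a multiple of (nparity + 1). Ignores compression and metadata.
from math import ceil

def raidz_alloc(volblocksize, ndisks=6, nparity=2, ashift=12):
    sector = 1 << ashift                    # 4 KiB sectors at ashift=12
    data = ceil(volblocksize / sector)      # data sectors for one block
    rows = ceil(data / (ndisks - nparity))  # stripe rows needed
    parity = rows * nparity                 # parity sectors
    total = data + parity
    total += -total % (nparity + 1)         # pad to a multiple of nparity+1
    return total * sector                   # bytes allocated on the pool

for vbs_k in (8, 16, 64, 128):
    alloc = raidz_alloc(vbs_k * 1024)
    print(f"volblocksize={vbs_k:>3}k -> {alloc // 1024:>3}k allocated "
          f"({alloc / (vbs_k * 1024):.2f}x the data)")
```

If I got that right, my 6-disk raidz2 allocates 24k for every 8k block (3x the data, i.e. the +200% I mention below) but only 96k per 64k block (1.5x).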
Now to my actual questions:
- Why does the virtual disk always report a 512B blocksize?
- Does this not matter because of some trickery in VirtIO/KVM?
- Why does the virtual drive not use the blocksize that is set on the storage it sits on?
- Why is there no per-VM / per-virtual-disk setting for blocksize, only the storage-wide one (see the snippet below my questions)?
- Why was 8k chosen as the default volblocksize for Proxmox, potentially risking +200% space usage on ZFS according to "RAID-Z parity cost"?
- Is there a good guideline anywhere for tuning the volblocksize of ZFS on Proxmox?
- Can I change the blocksize of an ext4 filesystem to 64k, or is that considered unstable?
- Is the parity/padding space waste nonexistent on mirrors?
- On setup 1) I have a (Windows) VM that reports 5.41T used, which should be around 4.92TiB, but ZFS reports 5.39TiB, so almost half a TiB more than it should be. Discard is enabled. Is this because there are three different blocksizes involved?
- On setup 2) there is a (Windows) VM where, when I write ~5GiB sequentially, ZFS reports around ~15GiB written, i.e. a write amplification of 3, even though I could not find any reports of write amplification problems on mirrors. Is that again because of the three different blocksizes? (The numbers are sketched below.)
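For completeness, this is how I arrived at the numbers in those last two questions; it is nothing more than unit conversions:

```
# Unit conversions behind the numbers in the last two questions.
TB, TiB, GiB = 10**12, 2**40, 2**30

vm_reported = 5.41 * TB                # what the Windows VM shows as used
print(vm_reported / TiB)               # ~4.92 TiB expected on the pool
print(5.39 - vm_reported / TiB)        # ~0.47 TiB that ZFS reports on top

written = 5 * GiB                      # sequential write inside the VM
seen_by_zfs = 15 * GiB                 # writes reported by the pool
print(seen_by_zfs / written)           # ~3x write amplification
```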
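And for reference, the only blocksize-related setting I have found in Proxmox so far is the storage-wide blocksize of the zfspool storage, which (as far as I understand) only affects newly created zvols. In /etc/pve/storage.cfg it looks roughly like this (storage and pool names here are just examples):

```
zfspool: vmdata
        pool tank/vmdata
        content images,rootdir
        blocksize 16k
```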
Cheers