non ZFS VM storage

Hello!

I've used ZFS datastores with plain SAS HBAs in the past and have been pretty happy with them handling protection and compression.

For a given setup, I have:
- OS boot: LSI SAS HBA + 1 x KINGSTON SA400S3 + 1 x SanDisk SD5SB2-1 + ZFS Mirror.
- Datastore storage: RAID controller (Intel RS3DC080 + BBU) + 4 SSD disks (2 x CT2000MX500SSD1 + 2 x WDS200T1R0A-68A4W0), to be used as a lab machine to run k8s tests and nested virtualization (ESXi on top of KVM).

Given this setup, I won't use ZFS for VM storage and I'm struggling to choose between:
- Plain XFS
- LVM
- LVM-thin

Can anybody comment from experience with regard to:
- Performance
- Space optimization (I'm used to ZFS compression)
- Functionality (clones, snapshots, etc.)
 
LVM is missing thin provisioning and snapshotting. XFS only gets snapshotting and thin provisioning at the file level when using qcow2 (which is also copy-on-write, like ZFS). LVM-thin has thin provisioning and snapshotting built in at the block level.
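For illustration, here is a minimal sketch (not Proxmox- or qcow2-specific, just a generic sparse file on a filesystem like XFS) of what file-level thin provisioning means: the file reports its full logical size while only the blocks actually written consume space.

```python
import os

# Hypothetical scratch file used only for this demo.
path = "thin_demo.img"

# "Allocate" 1 GiB logically by seeking, but only write one byte:
# the filesystem only backs the written block with real space.
with open(path, "wb") as f:
    f.seek(1024**3 - 1)
    f.write(b"\0")

st = os.stat(path)
print("apparent size:", st.st_size)          # ~1 GiB logical size
print("actual usage :", st.st_blocks * 512)  # only a few KiB allocated

os.remove(path)
```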
 
Checking the documentation, it seems I won't have compression without ZFS. Thin provisioning should be about the same with LVM-thin vs. XFS+QCOW2. Any insights on performance while using snapshots on LVM vs. QCOW2?
 
ZFS compression will usually not help much with VMs. VMs use zvols, and these have a fixed block size, called volblocksize. You usually want the volblocksize as small as possible to reduce read/write amplification for databases and so on.

Let's for example use the default ashift=12 and volblocksize=8K. ZFS can then only write full 4K sectors to the disks, and the virtual disks can only work with 8K blocks.

Say you have an 8K block that is 25% compressible. The data would be compressed down to 6K, but ZFS can only write 4K sectors to the disks. 6K won't fit in a single 4K sector, so it has to use 2x 4K sectors, and the 6K of data still consumes 8K on disk, so the compression isn't saving any space. To save any space, that 8K block would have to be at least 50% compressible, so the data can be compressed from 8K down to 4K or below and fit into a single 4K sector.

So to really make use of ZFS's block-level compression, you need to use a volblocksize that is a higher multiple of your sector size, for example a 64K volblocksize with an ashift of 12. But this also comes with its own downsides, like terrible overhead (i.e., a big performance loss and increased SSD wear) whenever you sync-write a block that is smaller than 64K.
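To put numbers on the example above, here is a minimal sketch (assuming ashift=12, i.e. 4 KiB sectors, and ignoring metadata and parity overhead) of how much on-disk space a single zvol block ends up using:

```python
import math

def on_disk_size(volblocksize_kib, compress_ratio, sector_kib=4):
    """Space one zvol block consumes after compression, rounded up
    to whole sectors. compress_ratio=0.25 means the data shrinks by 25%."""
    compressed_kib = volblocksize_kib * (1 - compress_ratio)
    sectors = math.ceil(compressed_kib / sector_kib)
    return sectors * sector_kib

# Default-ish setup: volblocksize=8K, data 25% compressible
print(on_disk_size(8, 0.25))   # -> 8  (6 KiB still needs two 4 KiB sectors)

# The same block has to be at least 50% compressible to save anything
print(on_disk_size(8, 0.50))   # -> 4

# A larger volblocksize gives compression more room to help, e.g. 64K at 25%
print(on_disk_size(64, 0.25))  # -> 48
```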

Block-level compression is more useful with LXCs, where datasets are used; these have a dynamic recordsize instead of a fixed volblocksize.
 
Thanks for the feedback. I'm going with XFS+QCOW2 on top of a 2+2 RAID10 virtual disk.
 
Disclaimer: I have not tried either of these. @Dunuin, maybe you have; would btrfs be a viable option?

A trick both btrfs and XFS can do is out-of-band deduplication, which can increase storage efficiency. Would it help VM disks (either btrfs block storage or qcow2s on XFS)?
 
