ZFS pool usage limits vs. available storage for dedicated LANcache VM

Gamienator

Active Member
Mar 16, 2021
Hi everyone,

I’m currently migrating to a new Proxmox hypervisor and could use some advice regarding storage layout and supported options.

My old Proxmox host had 7 × 960 GB SATA SSDs, hosting only a single VM: LANcache, which is a local caching server that stores frequently downloaded game and software content to reduce external bandwidth usage.

The new hypervisor has a different storage setup. It includes 2 × NVMe SSDs, which will host all other VMs and services, and 4 × SATA SSDs, which are dedicated exclusively to the LANcache VM. No other workloads will use the SATA-based storage.

If I follow the common recommendation to use at most ~85% of a ZFS pool and then place the VM disk on top of that, the LANcache VM ends up with only ~2 TiB of net usable storage. That feels like a significant loss compared to the raw capacity, especially since this pool is dedicated to a single, non-critical cache workload.
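For reference, my rough math, assuming raidz1 over the four SATA SSDs ("sata-pool" below is just a placeholder name): 4 × 960 GB is roughly 3.5 TiB raw, raidz1 leaves about 2.6 TiB, and 85% of that is only around 2.2 TiB. If capping usage is the recommended way, I assume a quota on the pool's root dataset would enforce the limit:

Code:
# Rough numbers for 4 x 960 GB in raidz1 (approximate):
#   raw:          ~3.5 TiB
#   after parity: ~2.6 TiB
#   85% of that:  ~2.2 TiB
# Cap usage via a quota on the root dataset ("sata-pool" is a placeholder):
zfs set quota=2200G sata-pool
# or, if ~95% is acceptable for a cache-only pool:
zfs set quota=2500G sata-pool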


I would also like to mention that I am intentionally avoiding motherboard "fake RAID" as well as mdadm, and would prefer to stay within storage technologies supported by Proxmox.

Therefore, I would like to ask:
  • Would it be reasonable (and safe) to increase the VM disk size to ~95% of the ZFS pool, considering that this ZFS pool is used by only one VM and stores cache data only?
  • Are there other supported ways to add or better utilize storage in this scenario? For example, I did not find an option in the GUI to create a ZFS stripe.
  • Is it possible to configure LVM striping or LVM parity via the Proxmox GUI? Since this VM only stores cache data, advanced ZFS features (snapshots, compression, etc.) are not critical for this use case; a rough CLI sketch of what I have in mind follows below.
Any guidance, best practices, or recommendations for such a setup would be greatly appreciated.
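For the LVM route, this is roughly what I had in mind on the CLI if the GUI does not expose it (just a sketch; the device paths, vg_sata and the thin pool name are placeholders for the four SATA SSDs):

Code:
# Placeholders: /dev/sd[a-d] = the four SATA SSDs, vg_sata / data = example names.
pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
vgcreate vg_sata /dev/sda /dev/sdb /dev/sdc /dev/sdd
# Thin pool striped across all four PVs; leave headroom for thin-pool metadata:
lvcreate --type thin-pool -l 95%FREE -i 4 -I 64 -n data vg_sata
# Register it as a storage in Proxmox:
pvesm add lvmthin sata-lvm --vgname vg_sata --thinpool data --content images,rootdir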

Thanks in advance!
 
And for faster access to the ZFS metadata, please use an SSD-backed ZFS special device, built as an n-drive ZFS mirror with n > 1.
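Something along these lines, as a sketch (pool name and disk IDs are placeholders; keep it a mirror, because losing the special vdev means losing the pool):

Code:
# Add a mirrored special vdev for metadata (placeholder pool name and disk IDs):
zpool add sata-pool special mirror /dev/disk/by-id/nvme-disk1 /dev/disk/by-id/nvme-disk2
# Optionally let small blocks land on it as well:
zfs set special_small_blocks=32K sata-pool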
 
He is already using SSDs for his pool.
You can add more mirror vdevs alongside existing ones, effectively getting RAID10.
It depends on whether he needs capacity or IOPS.
For capacity with 4 drives, raidz1 is the better choice, as it offers 75% of the gross capacity. It is limited in IOPS though, which shouldn't be a big issue if it's just for LANcache.
For IOPS I would go striped mirrors (equivalent to RAID10) as well. For my small shared storage for HA, I run a setup with 3 vdevs of one mirror each, which gives me nice IOPS but only 50% capacity.
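For four disks that would look roughly like this (sketch; pool name and device paths are placeholders):

Code:
# Two mirror vdevs striped together (RAID10-like), ~50% usable capacity:
zpool create -o ashift=12 sata-pool mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# Make it available to Proxmox as a VM disk storage:
pvesm add zfspool sata-zfs --pool sata-pool --content images,rootdir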
 
Capacity is the main focus here, tbh. The highest saturation so far was 2 Gbit/s, and a 10 Gbit/s NIC is installed. And since these are sequential cache files, there aren't that many small files IIRC.
 
Then raidz1 will give you the most capacity, while still allowing for one drive to die on you.

Edit: if the data is expendable, you could go full stripe mode (essentially RAID0) with zero redundancy, but any hiccup in the system will require you to rebuild the whole pool and redownload everything. It will give you 100% of the capacity and the highest IOPS at the same time, though.
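As a sketch, the two layouts on the CLI (placeholder pool name and devices):

Code:
# raidz1: ~75% usable, survives one failed drive:
zpool create -o ashift=12 sata-pool raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# plain stripe (RAID0-like): 100% usable, zero redundancy:
zpool create -o ashift=12 sata-pool /dev/sda /dev/sdb /dev/sdc /dev/sdd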
 
Exactly. I remember that the usable storage on ZFS is a bit lower than on LVM because of ZFS metadata, or am I wrong?
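I guess one way to check would be to compare the two views of the same pool, something like this (pool name is a placeholder):

Code:
# zpool list reports the raw pool size, including parity/padding:
zpool list -o name,size,alloc,free sata-pool
# zfs list reports what is actually usable after parity and metadata:
zfs list -o name,used,avail sata-pool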
 
My knowledge of ZFS is not deep enough for that; we have other people on the forums who can probably go into more detail on the internals of ZFS here.
I never had to deal with it, because I never tried squeezing the last bits out of the capacity.
I never exceed 90% on my pools.