Setting up ZFS fileserver on a VM

N0REAVER

Member
Aug 10, 2021
Hi.
I have a PVE server (in fact 2+ identical ones) with 2x hardware mirror volumes on SSDs, and also a PCIe SATA expander that I can pass through to the VM, which connects to a 4x HDD box.
I was thinking about setting up (and in fact did set up) a VM to act as a file server with LDAP and krb5, using ZFS "raid 10" (striped mirrors) across those 4 HDDs.
I allocated 2 virtual disks for the VM from those 2 SSD RAID volumes.
Question: the intention was to use one of those virtual disks as a SLOG device for the ZFS pool. Since it sits on a hardware RAID mirror, it should be safe with a single VM disk.
Is this setup a good idea?
The SSDs have 512 B sectors and the HDDs 4 KiB.

I have 4 identical 1U servers with ECC RAM, but only 2 HDD DAS boxes. The idea was that if something happens to one server, I can migrate the fileserver VM and attach the DAS to another server.
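For reference, the intended layout could be sketched roughly like this (pool name and device paths are placeholders, not your actual devices; in practice use stable /dev/disk/by-id names):

```shell
# Sketch only -- "tank" and all device names are assumptions.
# Striped mirrors ("raid 10") over the four passed-through HDDs;
# ashift=12 because the HDDs have 4K sectors:
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/ata-HDD_A /dev/disk/by-id/ata-HDD_B \
  mirror /dev/disk/by-id/ata-HDD_C /dev/disk/by-id/ata-HDD_D

# Add one of the SSD-backed virtual disks as the SLOG:
zpool add tank log /dev/disk/by-id/virtio-SLOG_DISK
```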
 
Question: the intention was to use one of those virtual disks as a SLOG device for the ZFS pool. Since it sits on a hardware RAID mirror, it should be safe with a single VM disk.
Is this setup a good idea?
Bad idea. ZFS wants direct access to disks, not virtual disks. Performance will probably be poor as well. The SLOG is only read after a system halt, so even a single SSD is reasonably safe.

IMHO:
Proxmox is a great hypervisor, and TrueNAS is a great ZFS NAS.
Power consumption aside, mixing them together is always a bad idea: they have different hardware needs, and it makes everything extremely complicated.
 
Bad idea. ZFS wants direct access to disks, not virtual disks. Performance will probably be poor as well. The SLOG is only read after a system halt, so even a single SSD is reasonably safe.
Well, that's what I mentioned: I passed through the SATA ports / the SATA PCIe expander (which is in a separate IOMMU group), so the VM does see and access the HDDs directly.

The SLOG is there to persist writes faster in the first place, so ZFS can acknowledge sync requests sooner; otherwise the ZIL would live on the HDDs themselves, costing more operations/time. It is then indeed read back if a system halt happens before the data was committed.

The plan was to use TrueNAS Scale, and I tested that, but there are some issues with TrueNAS regarding permissions etc., because it requires full access (a bindDN) to the LDAP server I am using for authentication (which I won't get). Normally I don't need that if I set up my own Debian system with sssd, LDAP, and krb5 just for authenticating against the LDAP server.
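For what it's worth, a minimal sssd setup of the kind described might look like this. All hostnames, the realm, and the base DN are placeholders; this assumes the LDAP server permits anonymous reads of user/group entries (hence no bindDN):

```shell
# Hypothetical example: LDAP for identities, Kerberos for authentication.
cat <<'EOF' > /etc/sssd/sssd.conf
[sssd]
services = nss, pam
domains = example

[domain/example]
id_provider = ldap
auth_provider = krb5
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
krb5_realm = EXAMPLE.COM
krb5_server = kdc.example.com
cache_credentials = true
EOF
chmod 600 /etc/sssd/sssd.conf   # sssd refuses world-readable config
systemctl restart sssd
```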
 
Well, that's what I mentioned: I passed through the SATA ports / the SATA PCIe expander (which is in a separate IOMMU group), so the VM does see and access the HDDs directly.
Yeah, but the SLOG is a virtual disk, right?

And you haven't described your use case: a SLOG only accelerates sync writes. Does that apply to you?
 
Yeah, but the SLOG is a virtual disk, right?

And you haven't described your use case: a SLOG only accelerates sync writes. Does that apply to you?
Yes, it's virtual.
Mainly user home folders shared over NFS.
Synchronous writes are desired for consistency-critical applications such as databases and some network protocols such as NFS but come at the cost of slower write performance.
[...]
Because each disk can only perform one operation at a time, the performance penalty of this duplicated effort can be alleviated by sending the ZIL writes to a separate ZFS intent log, or SLOG (or simply "log"). While using a spinning hard disk as SLOG yields performance benefits by reducing the duplicate writes to the same disks, it is a poor use of a hard drive given the small size but high frequency of the incoming data. The optimal SLOG device is a small, flash-based device such as an SSD or NVMe card, thanks to their inherent high performance, low latency and, of course, persistence in case of power loss.
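To check whether a workload actually issues sync writes, and whether the log device is being hit at all, something like this can help (the pool name "tank" is an assumption):

```shell
# Per-vdev activity, including the log device, refreshed every 5 seconds.
# If the "log" row stays at zero writes under load, the workload is not
# issuing sync writes and the SLOG buys nothing:
zpool iostat -v tank 5

# Check the sync policy on the dataset (standard / always / disabled):
zfs get sync tank
```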
 
Normally I don't need that if I set up my own Debian system with sssd, LDAP, and krb5 just for authenticating against the LDAP server.
So TrueNAS actually doesn't work for you, the VM plus passed-through HBA makes the setup more complex, and you mostly only need NFS, which ZFS already supports natively. Are you sure you need the more complex setup instead of publishing NFS shares directly from the Proxmox host?
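If you go that route, ZFS can export a dataset over NFS directly from the host. Dataset name and export options below are just examples; this assumes an NFS server (e.g. nfs-kernel-server) is installed on the host:

```shell
# Create a dataset for home directories and export it over NFS,
# restricted to a hypothetical LAN subnet:
zfs create tank/home
zfs set sharenfs="rw=@192.168.1.0/24" tank/home

# Verify the export is active:
zfs get sharenfs tank/home
showmount -e localhost
```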
 
