ZFS Storage for VMs and Proxmox vs TrueNAS Management

dizzydre21

Member
Apr 10, 2023
Hi folks,

I'm going to be building a new EPYC-based server this week, and I have some questions concerning ZFS pools under Proxmox.

Previously, I've done most ZFS related things with TrueNAS, first virtualized, and now bare-metal. I have two TrueNAS machines that handle all typical NAS duties these days.

What I'm curious about is creating a ZFS pool, on the same machine the VMs reside on, for use by those VMs and for a few iSCSI shares to other machines. I would like this pool to be as performant as possible, using striped mirrors and a special vdev for files under 64 KB. It will have a 25 Gb connection to the other machines.
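If I ended up doing it from the Proxmox CLI, I picture it being roughly something like this (pool name and device paths are just placeholders; special_small_blocks is what steers the small files onto the special vdev):

    zpool create -o ashift=12 tank \
        mirror /dev/disk/by-id/nvme-diskA /dev/disk/by-id/nvme-diskB \
        mirror /dev/disk/by-id/nvme-diskC /dev/disk/by-id/nvme-diskD \
        special mirror /dev/disk/by-id/nvme-diskE /dev/disk/by-id/nvme-diskF
    zfs set special_small_blocks=64K tank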

Since I am familiar with TrueNAS and still learning ZFS from the CLI, would it make sense to pass through all the drives to a TrueNAS VM for the pool creation? I would then export the pool and import it into Proxmox.
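If that works the way I think it does, the hand-off itself would just be an export inside the TrueNAS VM followed by an import on the Proxmox host, e.g. with a pool called tank:

    zpool export tank      # inside the TrueNAS VM
    zpool import -f tank   # on the Proxmox host; -f because it was last imported by another system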

If I just used a TrueNAS VM to manage the pool instead of Proxmox, what performance hit would the VMs take? I don't have a 25 Gb switch, but the devices that need that bandwidth are directly connected via SFP28 DAC cables. I don't know how that would work when connecting the ZFS storage to my VMs, since they'll be on the same machine.
 
What you describe is a storage appliance setup, which was common in the old VMware days. This can be done in PVE, but only for VMs. It can be done with ZFS-over-iSCSI - TrueNAS integration required - which handles all the management for you. I've only used this with Debian Linux a couple of times and never with FreeBSD-based systems. Your physical network will not be used if TrueNAS runs on the same machine as your VMs; only virtualized network devices will be used in that case.
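For reference, a ZFS-over-iSCSI storage definition in /etc/pve/storage.cfg looks roughly like this (all values are made up, and TrueNAS is not one of the built-in providers, which is where the third-party integration comes in):

    zfs: zfs-over-iscsi
        pool tank
        portal 192.168.10.20
        target iqn.2005-10.org.freenas.ctl:proxmox
        iscsiprovider LIO
        lio_tpg tpg1
        content images
        sparse 1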
 
Let me make sure I understand. Also, I'm on TrueNAS Scale, which is Debian based.

If I have a TrueNAS VM running under Proxmox and use it to manage the ZFS pool, will I or will I not be utilizing the network if my VMs are stored on the pool? I would have iSCSI shares set up for the other host machines, and I know those would for sure utilize the network.

Basically, I'm just wondering how to give all of the local VMs direct access to the ZFS pool. I've read that with paravirtualized NICs you basically get inter-VM bandwidth that is much higher than the physical NIC can actually achieve. How would I do this if TrueNAS were virtualized? I was planning to pass the NIC through to the VMs, but it sounds like virtualizing them would be better for bandwidth/throughput.
 
TrueNAS makes its ZFS storage available for VM storage via network protocols such as NFS, iSCSI etc., whether virtualised or not. If TrueNAS runs in a VM with paravirtualized network devices, it actually uses the network stack of the host and communicates with other VMs on the same host over Linux bridges, basically the equivalent of the VMs calling each other via localhost, with some overhead. In other words very fast: low latency and throughput of 20-30 gigabits per second or more, depending on memory bandwidth and CPU. All while TrueNAS cannot tell the difference other than by which network driver the kernel is using.
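Attaching such a paravirtualized NIC to the TrueNAS VM is just the usual virtio device on a bridge, e.g. (VM ID is an example):

    qm set 100 --net0 virtio,bridge=vmbr0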

However, think about whether you need TrueNAS at all in this instance. If you let Proxmox run the ZFS pool for its VM storage, you skip the network stack and protocols altogether, which is even more efficient. TrueNAS provides a nice GUI for disk management (swapping out broken disks etc.) and for managing network shares and permissions, but it adds complexity and overhead.
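If you go that route, registering an existing pool as VM storage is a one-liner (storage and pool names assumed):

    pvesm add zfspool tank-vm --pool tank --content images,rootdir --sparse 1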
 
Thanks for the clarification.

Okay, so say I'm doing all of the ZFS management within Proxmox. How do you create an iSCSI share for the remote client machines that will need access to the pool? The Proxmox docs I've seen refer to adding a share FROM a remote machine, not sharing one TO that machine.
 
Right, Proxmox itself doesn't provide a way to export iSCSI over the network. You could set it up relatively easily yourself, as the underlying Debian has built-in support for this (via targetcli), but you'd be bastardising your Proxmox install somewhat, which is not to everyone's taste. TrueNAS would provide a GUI for you.
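If you did want to roll it yourself on the host, the rough shape would be something like the following (untested sketch; zvol, target and initiator names are made up):

    apt install targetcli-fb
    zfs create -V 500G tank/games-lun0
    targetcli /backstores/block create name=games-lun0 dev=/dev/zvol/tank/games-lun0
    targetcli /iscsi create iqn.2003-01.org.linux-iscsi.pve:sn.games
    targetcli /iscsi/iqn.2003-01.org.linux-iscsi.pve:sn.games/tpg1/luns create /backstores/block/games-lun0
    targetcli /iscsi/iqn.2003-01.org.linux-iscsi.pve:sn.games/tpg1/acls create iqn.1991-05.com.microsoft:gaming-pc
    targetcli saveconfig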
 
That was one of my worries. I'm just doing all of this in my homelab, but I've really gotten into the mindset of separating NAS, hypervisor, and networking duties. Sometimes there can be a lot of overlap, though.

This particular zpool may end up only providing iSCSI shares to my gaming machines, and then I'll just keep VM storage on the local NVMe drives. There are too damn many ways to set all this up. Just about the time I'm happy and satisfied, I'll come up with some other method that breaks everything for a few days.
 
As you say, you could do both. Flash and ZFS on Proxmox for ultra-fast and resilient local VM storage. Then a NAS VM which provides slow(er) storage over the network, this could run TrueNAS if you like. Preferably with its own passed-through hardware (SAS/SATA controller or NVMe).
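Passing a whole controller through to that NAS VM is a single setting (PCI address and VM ID are examples):

    qm set 101 --hostpci0 0000:03:00.0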

And then look at Ceph…
 
I am one Lenovo Tiny PC away from playing around with clusters and Ceph storage. It looks super interesting, but I doubt I'll ever use it on my main system. I stick to the KISS method in my homelab for services that (mostly) need to stay up 24/7. I know that is a totally contradictory statement considering HA and all that stuff.
 
Ceph is amazing, but it needs at least 3, preferably 4, nodes and 10 or 25 Gb networking. One typical homelab pattern is to run all your nodes in a Proxmox/Ceph cluster and let that also be your VM storage, then layer one or several NAS VMs on top, each with its own additional attached storage to expose over the network, e.g. a bunch of spinning disks for media, backups etc.
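Getting a basic Ceph setup going on a PVE cluster is only a handful of commands (network and device names are examples; init runs once, OSD creation is per disk per node):

    pveceph install
    pveceph init --network 10.10.10.0/24
    pveceph mon create
    pveceph osd create /dev/nvme1n1
    pveceph pool create vmpool --add_storages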
 
I'm just doing all of this in my homelab, but I've really gotten into the mindset of separating NAS, hypervisor, and networking duties. Sometimes there can be a lot of overlap, though.
KISS is always your friend. I would try to minimize external dependencies, so run everything on ONE machine.
 
Yes, but less fun... ;-) It also makes Proxmox/OS updates scarier, and you're more vulnerable to hardware problems with all eggs in one basket.
 
That analogy does not fly with your setup. If you have two systems and you need both to work properly to have one working PVE box, it has double the failure odds, because each can fail individually.
 
  • Like
Reactions: Johannes S
Ah. Well. A 2-node cluster can work fine too; add a Raspberry Pi QDevice and you keep cluster quorum even with 1 node down. Ceph is obviously not an option then, so it's either VM storage on a NAS with a single point of failure, or local storage with replication across nodes.
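For the record, the QDevice part is pretty painless (IP is an example): install corosync-qnetd on the Pi and corosync-qdevice on the nodes, then from one PVE node run

    pvecm qdevice setup 192.168.1.5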
 
That analogy does not fly with your setup. If you have two systems and you need both to work properly to have one working PVE box, it has double the failure odds, because each can fail individually.
I agree with you on this.

I'm afraid of something breaking when using multiple nodes and all my services, albeit non-critical home lab stuff, going down and my wife hating me for a week while I spend every waking hour trying to fix it.

That said, I am super excited to start playing with Ceph and HA once I can find another deal on a Lenovo Tiny PC. I have a buddy who's going to 3D print me some brackets to hold a tiny blower inside so that I can run either 10Gb or 25Gb dual-port NICs in each machine. I will probably play around with a few of my normal services running on an HA cluster, but I highly doubt I'll leave it running that way. Another thing that would hold me back is the lack of live migration for VMs using PCIe passthrough. I have VMs that need a PCIe Coral device and an iGPU, plus a USB-based Zigbee coordinator. The Zigbee coordinator could be replaced by an Ethernet-based one, but the other devices would prevent me from having a true HA setup, no?
 
For playing around with Ceph etc. you could just spin up three VMs and install Proxmox VE with Ceph on them.

Regarding HA: I would use the third mini PC as a combined Proxmox Backup Server/QDevice node. Alternatively you could set up a VM on your TrueNAS server as a QDevice (which might also be used as a Proxmox Backup Server), or a combination of them all ;)
 
Pretty cool! But given TrueNAS's history of disregard for forward and backward compatibility, breaking changes and lack of consistency across versions, I wonder if this is more trouble than it's worth? It would likely stop working again soon, or at best be a heavy maintenance burden to keep up to date with TrueNAS.
 
As long as the API settles down it should remain pretty stable moving forward. The original plugin was more difficult to maintain for sure.
 