ZFS pool created in Proxmox VE and passed through to a VM shows as online on both host and guest

sush

New Member
Nov 21, 2022
London, UK
My current NAS setup has Proxmox VE running on bare metal and TrueNAS Scale running in a VM.
  1. I created a ZFS pool "appspool" from the UI: Datacenter -> Storage -> Add -> ZFS
  2. I then created a TrueNAS Scale VM and passed the disk through:
    Bash:
    # attach the whole NVMe to VM 900 as a virtual SCSI disk
    qm set 900 --scsi2 /dev/disk/by-id/nvme-WDC_WDS250G2B0C-00PXH0_2108AG451111
  3. After installing TrueNAS Scale, I imported the ZFS pool from the UI: Storage -> Import -> Select "appspool"
After this, if I open a shell on either Proxmox VE or TrueNAS Scale and run zpool list, I can see that the pool is ONLINE on both.
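For reference, this maps to roughly the following commands on the PVE host. I actually did it through the UI, so treat this CLI version as an approximate equivalent rather than exactly what I ran:
Bash:
# create the pool on the host from the NVMe (what the UI wizard does under the hood)
zpool create appspool /dev/disk/by-id/nvme-WDC_WDS250G2B0C-00PXH0_2108AG451111
# register the existing pool as a PVE storage entry
pvesm add zfspool appspool --pool appspool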

My question is whether this is the right way to set it up, or should I completely remove it from the host?
(I tried this on a test pool by running zpool export testpool; after that it reported the pool as corrupted and I had to recreate the pool in TrueNAS, losing all data on it.)

If having the pool online on both host and guest is not a good idea, is there a clean way to disconnect it from the host and pass it through to the guest without additional configuration?

Thanks in advance,
Sush
 
You can't mount block devices on multiple OSs at the same time; doing that corrupts the data. Either use the ZFS pool on PVE or in the TrueNAS VM, and then use SMB/NFS/iSCSI shares to give the other one access to the files/folders/block devices on that pool. Creating that ZFS pool inside the TrueNAS VM would be easier, as PVE has no NAS functionality, so you would have to install and set up an SMB/NFS server on your own via the CLI if you wanted PVE to provide NFS/SMB shares. Out of the box, PVE can only act as an SMB/NFS client.
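As a rough sketch of the client side (storage IDs, server IP, export path, share name and credentials below are just placeholders), attaching a share exported by the TrueNAS VM to PVE could look like this:
Bash:
# mount an NFS export from the TrueNAS VM as a PVE storage (IP and export path are placeholders)
pvesm add nfs truenas-nfs --server 192.168.1.50 --export /mnt/appspool/media --path /mnt/pve/truenas-nfs --content backup,iso
# or an SMB/CIFS share (share name and credentials are placeholders)
pvesm add cifs truenas-smb --server 192.168.1.50 --share media --username pveuser --password 'changeme'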

And you have an NVMe drive, so I would use PCI passthrough to bring it into the TrueNAS VM rather than disk passthrough. That way TrueNAS works with the real SSD instead of a virtual disk and has direct access to the real hardware: no virtualization, no additional overhead, less disk wear, better performance, and TrueNAS can monitor SMART.
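The rough procedure for that (assuming IOMMU/VT-d is already enabled; the PCI address below is just an example) would be something like:
Bash:
# find the PCI address of the NVMe controller
lspci -nn | grep -i nvme
# pass the whole controller to VM 900 (example address; pcie=1 needs a q35 machine type)
qm set 900 --hostpci0 0000:01:00.0,pcie=1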

And keep an eye on the SSD wear. ZFS has massive overhead and can kill consumer SSDs in no time when hit with specific workloads (like a DB doing a lot of small sync writes). Your SSD is only rated for 150 TB TBW and isn't really recommended for use with ZFS. Something like a Micron 7450 PRO (MTFDKBA480TFR-1BC1ZABYY) or Seagate Nytro 5000 (XP400HE30002) would have been a better choice.
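To actually keep an eye on the wear, smartmontools is enough; the device node below is an example:
Bash:
# "Data Units Written" is counted in units of 512,000 bytes; "Percentage Used" is the drive's own wear estimate
smartctl -a /dev/nvme0 | grep -iE 'data units written|percentage used'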
 
I have successfully managed to do PCI passthrough of the NVMe. It has now disappeared from the host and is only visible in the guest (TrueNAS). I tried exporting the ZFS pool using
Code:
zpool export appspool
but after this the data on the disk was lost. I checked this on both host and guest. After doing the passthrough, I wiped the disk and created a ZFS pool inside TrueNAS, and it works well now.

I also have a couple of hard drives which are connected to the onboard SATA controller. I only have one PCIe slot which is already used by a 2.5G network card. So using an HBA is out of the question. I could use an external USB HDD dock and pass it through, but I don't think this is the recommended approach?

My current plan for the HDDs is to use regular disk passthrough. I plan to use virtio-blk rather than virtio-scsi (compared here) for better throughput, and I don't intend to scale this PVE node beyond a few more HDDs. I'm not sure if this would degrade their performance too much; the disks are enterprise grade - Seagate EXOS and Toshiba MG series.
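Concretely, I am thinking of something like this per disk (the by-id path below is a placeholder, not one of my actual drives):
Bash:
# attach a SATA HDD to VM 900 as a virtio-blk device
qm set 900 --virtio1 /dev/disk/by-id/ata-ST16000NM001G_EXAMPLE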

And regarding the SSD wear, I don't plan on running anything storage intensive on this. The main purpose will be to use it as a home NAS for all my devices. Apart from that, it will run Jellyfin, PhotoPrism, NextCloud, Nginx, and Tailscale. I only expect to write about 1 TB or less per year to this SSD.

And thanks again for your quick response and insightful information.
 
USB passthrough isn't great for HDDs; even when using a USB dock, it would still be recommended to pass those USB HDDs through using disk passthrough rather than USB passthrough.
My home server with Emby, Nextcloud, Nginx and so on writes nearly 1 TB per day while idling ... don't underestimate it, especially when using ZFS.
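If you want real numbers instead of an estimate, watch the pool for a while (pool name as used earlier in the thread; the 60-second interval is arbitrary):
Bash:
# first report shows averages since import, then read/write activity per 60-second interval
zpool iostat -v appspool 60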
 
