Access ZFS Datasets on Storage Pool from VMs

norsemangrey

Member
Feb 8, 2021
Hi experts,

I have read a bunch of posts on this topic, but have yet to find a clear answer. My intended setup is depicted below. I need a proper way to access the ZFS datasets on the HDD storage pool from the VMs. What is the best way to accomplish this?


[Attached image: diagram of the intended setup]
 
Either set up an NFS/iSCSI share, or pass through the HBA (if you use one) or the disks to the VM.
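For the disk passthrough variant, the usual Proxmox approach looks roughly like this (the VM ID 100, the slot scsi1 and the disk ID are placeholders, use your own values):

Code:
# Find the stable /dev/disk/by-id/ path of the disk you want to hand over
ls -l /dev/disk/by-id/
# Attach the whole physical disk to VM 100 as an extra SCSI device (example values)
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL

Keep in mind that the disk then belongs to that one VM exclusively.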

Thanks for the reply, @ph0x.

As you can see from the drawing, I am planning to have multiple VMs that should be able to access the datasets, so passing an HBA through to one of them is not an option (in any case, all the HDDs are connected directly to the motherboard).

I have seen several mentions of setting up an NFS share (or iSCSI; what is the difference?), but never any details on how to actually set it up or whether it is the "best practice" approach.
 
NFS is file-based sharing, while iSCSI is usually block-oriented (and a bit more complicated to set up :) ).
iSCSI targets behave like local disks, so you would need one for each VM, and backing up would mainly be the VM's task, since the host only sees blocks. Therefore I would go for NFS, which can be backed up at the host level.
There are a plethora of tutorials on how to share files via NFS. Usually you define the shares in /etc/exports. ZFS can also share via NFS natively, but I haven't tried that myself yet. Here you'll find some info about that: https://blog.programster.org/sharing-zfs-datasets-via-nfs
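As a rough sketch of both variants (the dataset tank/data, the 192.168.x.x addresses and the mount point are placeholders, adjust them to your pool and network):

Code:
# --- Classic kernel NFS server on the Proxmox host ---
apt install nfs-kernel-server
# /etc/exports: export the dataset's mountpoint to the VM network
/tank/data 192.168.1.0/24(rw,sync,no_subtree_check)
# reload the export table after editing /etc/exports
exportfs -ra

# --- Or let ZFS manage the export natively ---
zfs set sharenfs="rw=@192.168.1.0/24" tank/data

# --- Inside a VM: mount the share (or add it to /etc/fstab) ---
mount -t nfs 192.168.1.10:/tank/data /mnt/data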

Be careful when you try to share the same files through more than one method, because that's very prone to data loss due to incompatible locking mechanisms.

Additionally, a separate VLAN for NFS is recommended.
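On Proxmox that can be done with a tagged sub-interface on a VLAN-aware bridge, roughly like this (bridge name, VLAN tag 30 and the address are just examples):

Code:
# /etc/network/interfaces (excerpt), example values only
auto vmbr0.30
iface vmbr0.30 inet static
    address 10.0.30.1/24
# the VMs' virtual NICs would then be tagged with VLAN 30 on vmbr0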
 
NFS is file-based sharing, while iSCSI is usually block-oriented (and a bit more complicated to set up :) ).
iSCSI targets behave like local disks, so you would need one for each VM, and backing up would mainly be the VM's task, since the host only sees blocks. Therefore I would go for NFS, which can be backed up at the host level.

Thanks for clarifying. I think NFS is the way to go for me!

There are a plethora of tutorials on how to share files via NFS. Usually you define the shares in /etc/exports. ZFS can also share via NFS natively, but I haven't tried that myself yet. Here you'll find some info about that: https://blog.programster.org/sharing-zfs-datasets-via-nfs

Yes, I've seen many tutorials, but not being very familiar with NFS, ZFS, or Proxmox, I wasn't certain how they relate to Proxmox and what the best practices there are. Thanks for the link! I will check it out.

Be careful when you try to share the same files through more than one method, because that's very prone to data loss due to incompatible locking mechanisms.

Do you mean, for instance, if I share a dataset through NFS while at the same time bind-mounting it into an LXC container? How should I handle a case like that? (That situation might actually apply to me.)

Additionally, a separate VLAN for NFS is recommended.

Is this an issue as long as the NFS sharing only happens internally on the server, i.e. between the local HDD ZFS pool and the VMs?


Edit: I have read some posts where people use an LXC container to share datasets via NFS, and that this is not recommended because if the NFS server kernel crashes, the Proxmox host might also crash. How is this different from running the NFS server directly on the host?
 
Do you mean, for instance, if I share a dataset through NFS while at the same time bind-mounting it into an LXC container? How should I handle a case like that? (That situation might actually apply to me.)
You should probably use separate datasets or folders within it, then. Or have the LXC container also use NFS.
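For the bind-mount side, a mount point entry on the container is usually enough, something like this (container ID 101 and both paths are placeholders):

Code:
# Bind a folder of the dataset into container 101 at /mnt/media (example values)
pct set 101 -mp0 /tank/data/media,mp=/mnt/media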

Is this an issue as long as the NFS sharing only happens internally on the server, i.e. between the local HDD ZFS pool and the VMs?
No, you're right, in that case all traffic is handled locally.

Edit: I have read some posts where people use an LXC container to share datasets via NFS, and that this is not recommended because if the NFS server kernel crashes, the Proxmox host might also crash. How is this different from running the NFS server directly on the host?
I don't see a difference either. Although I haven't experienced a kernel crash because of NFS so far. Then again, I don't have a ton of experience with NFS, so this might indeed be valid.
 
Or have the LXC container also use NFS.
Only privileged LXCs can mount NFS shares. For unprivileged LXCs, bind-mounting is the only option.
What I do to bring network shares into an unprivileged LXC is to mount the share on the host itself and then bind-mount that mounted share into the LXC.
Maybe that is an option too, if it is possible to mount a share on the same host that is acting as the server.
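A rough sketch of that workflow, with made-up server address, container ID and paths:

Code:
# On the Proxmox host: mount the network share
mkdir -p /mnt/remote-data
mount -t nfs 192.168.1.50:/tank/data /mnt/remote-data
# Bind-mount the mounted share into unprivileged container 102
pct set 102 -mp0 /mnt/remote-data,mp=/mnt/data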
 
Thanks for the insight, although I fear that this is not an option here, since the Proxmox host itself would be the sharing server.
There are probably still other ways to satisfy the need.
 
What I like to do is install SSHFS. I need my Plex VM to access the media files stored on my HDD pool, much like your setup.
Prior to that, I had Plex running in a Docker container on my Proxmox host, which solves the issue with volume mounts. But I needed HW transcoding, which was a pain to set up via Docker.
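For reference, the SSHFS mount inside the VM looks roughly like this (user, host name and paths are placeholders):

Code:
# Inside the VM: install the client and mount the pool path over SSH
apt install sshfs
mkdir -p /mnt/media
sshfs someuser@proxmox-host:/tank/media /mnt/media -o allow_other,reconnect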
 
What I like to do is install SSHFS. I need my Plex VM to access the media files stored on my HDD pool, much like your setup.
Prior to that, I had Plex running in a Docker container on my Proxmox host, which solves the issue with volume mounts. But I needed HW transcoding, which was a pain to set up via Docker.
Interesting. What is the advantage of SSHFS vs. NFS? Also, could you not run Plex in an LXC?
 
