Using ZFS pool and dataset on my Proxmox with OMV

Hello,

I have a Proxmox server hosting OMV in a VM.

For simplicity, performance, and convenience, I want my data hosted on the Proxmox host's ZFS pool.

I want to use OMV as a Samba server manager (and for other things), and to be able to access the same data from other VMs/LXCs (Nextcloud, backups, a Plex LXC, etc.).
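For the LXC side (Plex, backups, etc.), my understanding from the Proxmox docs is that a dataset on the host can simply be bind-mounted into a container, roughly like this (the container ID and paths are just examples I made up):

```
# on the Proxmox host: bind-mount a host dataset into an LXC
# 101 = container ID, /tank/media = dataset mountpoint on the host (both examples)
pct set 101 -mp0 /tank/media,mp=/mnt/media
```

The VMs (OMV, Nextcloud...) would then reach the same data over the network instead, which is the part below.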

I have now spent hours and hours reading A LOT of posts about accessing a ZFS pool/dataset on Proxmox. There is no complete tutorial that explains a fast and easy way to do it.

It looks like the easiest, fastest way I found is this one. It seems neat and makes sense... but it's not a very detailed explanation for beginners, and it has some typos ("briding ports"?!? o_O):
  • Created some ZFS datasets on Proxmox, and configured a network bridge (without bridge ports - so like a "virtual network", in my case 192.168.1.0/28) between Proxmox and OMV (with a VirtIO NIC).
  • Then I created some NFS shares on Proxmox and connected to them via the Remote Mount plugin in OMV. Speed is like native (the VirtIO interface did an incredible 35 Gbit/s when I tested it with some iperf benchmarks), and now I don't need any passthrough. Works like a charm for me.
It seems like a good solution, but I have no clue how to "bridge" Proxmox and OMV.
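From the bits and pieces I've gathered so far, I guess the Proxmox side would look roughly like this, but please correct me if I'm wrong (vmbr1, the pool name "tank" and the IPs are just my assumptions, based on the 192.168.1.0/28 example above):

```
# /etc/network/interfaces on the Proxmox host:
# a bridge with no bridge ports = host-only "virtual network"
auto vmbr1
iface vmbr1 inet static
        address 192.168.1.1/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```

And then the dataset plus an NFS export restricted to that small subnet:

```
# create a dataset for the shared data (pool name is just an example)
zfs create tank/data

# export it over NFS to the 192.168.1.0/28 "virtual network" only
apt install nfs-kernel-server
echo '/tank/data 192.168.1.0/28(rw,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -ra
```

In the OMV VM I would then add a second VirtIO NIC attached to vmbr1, give it e.g. 192.168.1.2/28, and point the Remote Mount plugin at 192.168.1.1:/tank/data. Is that roughly it, or am I missing something?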

It's a recurring problem that I've read about on a lot of forums, Reddit, etc., and I think this is the nicest way to do it for a "home" server with limited resources and power.

So, if an "expert" could explain how they would set this up, I would be very happy :D (and probably many others would be too) ;)

Thanks a lot!

Hello, I am also lazy, so I understand the request. I tried multiple variants and finally settled on pure Proxmox plus a virtual machine serving ZVOLs over NFS. I masked the HBA in Proxmox and passed the whole controller through to the VM. The advantage is that the speed is very good and there is no need to experiment inside PVE itself, which in my case sooner or later always ends badly.

It is a good idea to make a script that loops (for example with pings) in /etc/init.d/rc-mount plus a systemd unit, which mounts what you need automatically after the VM boots, so all PVE storages are in place.

In my case the HBA sits at address 0000:00:17.0 in lspci, and I bind it directly in /etc/kernel/cmdline with vfio-pci.ids=8086:a352. The only moment this does not work and Proxmox still sees the controller is during the early ZFSBootMenu stage in the dracut mini-kernel, because that mounts it. I guess it is ZFSBootMenu's permissive approach, and it probably has a solution. After that, once PVE is up, it is surprisingly solid and stable, even though opinions I have read on the internet say this should not be used in production. My experience is different.
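To give a rough idea, the pieces look more or less like this on my side (IPs, names and paths are only examples, adapt them to your setup):

```
# /etc/kernel/cmdline - keep your existing root=... options, just append these,
# then refresh the boot entries (on a stock Proxmox boot setup: proxmox-boot-tool refresh;
# with ZFSBootMenu it depends on how you manage the kernel command line)
intel_iommu=on iommu=pt vfio-pci.ids=8086:a352
```

And the wait-and-mount part after boot:

```
#!/bin/sh
# /usr/local/sbin/wait-nas-mount.sh (name is just an example)
# wait until the storage VM answers pings, then mount its NFS export
# so the PVE storage that sits on top of it is usable

NAS_IP=192.168.1.2              # IP of the storage VM (example)
NAS_EXPORT=/tank/data           # NFS export inside the VM (example)
MOUNTPOINT=/mnt/nas             # where PVE expects the data (example)

until ping -c 1 -W 1 "$NAS_IP" >/dev/null 2>&1; do
    sleep 5
done

mountpoint -q "$MOUNTPOINT" || mount -t nfs "$NAS_IP:$NAS_EXPORT" "$MOUNTPOINT"
```

```
# /etc/systemd/system/wait-nas-mount.service (example)
[Unit]
Description=Mount NFS from the storage VM once it is reachable
After=pve-guests.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/wait-nas-mount.sh
RemainAfterExit=yes
TimeoutStartSec=600

[Install]
WantedBy=multi-user.target
```

Enable it once with systemctl enable wait-nas-mount.service and the mount comes back automatically after every reboot.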
 
