Using ZFS pool and dataset on my Proxmox with OMV

sebcbien

Jun 7, 2020
Hello,

I have a Proxmox server hosting OMV in a VM.

For simplicity, performance and convenience, I want my data hosted on my Proxmox ZFS.

I want to use OMV as a Samba server manager (and for other things), and to be able to access the same data from other VMs/LXCs (NextCloud, backup, Plex LXC etc.).

I have now spent hours and hours reading a LOT of posts about accessing a ZFS pool/dataset on Proxmox. There is no complete solution or tutorial that explains a fast and easy way to do it.

It looks like the easiest, fastest way I found is this one. It seems neat and makes sense... but it's not a very detailed explanation for beginners, and it has some typos ("briding ports"?!? o_O):
  • Created some ZFS datasets on Proxmox, and configured a network bridge (without bridge ports - so like a "virtual network", in my case 192.168.1.0/28) between Proxmox and OMV (with a VirtIO NIC).
  • Then I created some NFS shares on Proxmox and connected to them via the Remote Mount plugin in OMV. Speed is like native (the VirtIO interface did an incredible 35 Gbit/s when I tested it with some iperf benchmarks), and now I don't need any passthrough. Works like a charm for me.
It seems a good solution, but I have no clue how to "bridge" Proxmox and OMV.

It's a recurring problem that I've read about on a lot of forums, Reddit etc., and I think it's the nicest way to do it for a "home" server with limited resources and power.

So, if an "expert" could explain how he would set this up, I would be very happy :D (and probably many others) ;)

Thanks a lot !
 
The easiest way to go about this is probably to just use ZFS on the PVE host and put your OMV VM's disk on there. Then have your other CTs/VMs mount the OMV shares via SMB/NFS/SSH/whatever.

If you really want to go the way you describe, "bridging" in this context simply refers to creating a second network bridge (e.g. vmbr1, easily done via the GUI under '<node> -> Network') and not specifying a "bridge port". The bridge (which you can think of as a sort of virtual switch) will then not be connected to a physical port; instead it just connects together any VMs that have a (secondary) NIC attached to it (that is, it "bridges" them together). The PVE host will always be connected to a vmbr, as long as an IP is specified for it (this can be done in the GUI as well, when creating the bridge).

I personally would recommend just using the default bridge, makes configuration easier if all your CTs/VMs are supposed to be able to access the internet anyway.
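For reference, a port-less bridge like the one described above ends up in /etc/network/interfaces roughly as follows (a sketch - the name vmbr1 is from the post above, while the host address within 192.168.1.0/28 is an assumption):

Code:
auto vmbr1
iface vmbr1 inet static
        address 192.168.1.1/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0

With that in place, giving the OMV VM a second VirtIO NIC attached to vmbr1, with an address in the same /28, connects host and guest.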
 
Thanks @Stefan_R !
Understood the "bridging" part very well, perfect, I will follow your recommendation!

Just one more clarification about "put OMV VM's disks on there" - does that mean:
- Creating LVM disks on the ZFS of the PVE host?
- Binding a full (ZFS) disk to OMV?

With both solutions, the Proxmox file system cannot access the files "directly". Right?
I'm then tied to OMV and could not replace it with another management web interface later without migrating the data from OMV to the new solution. In my mind, having my data hosted by Proxmox allows me to easily test/use/migrate/update services like OMV5, OMV6, FreeNAS, Unraid, ownCloud, NextCloud, whatever...

Or maybe install Samba on my Proxmox host itself - even if it's not recommended to "pollute" the Proxmox install, it would give me a BASIS, and everything could access it through containers, LXCs, VMs etc.
 
Just one more clarification about "put OMV VM's disks on there" - does that mean:
- Creating LVM disks on the ZFS of the PVE host?
- Binding a full (ZFS) disk to OMV?
Assuming your PVE host already has a ZFS pool set up, what I mean is: simply create the VM and select your ZFS pool as the backend. That will create a ZFS subvol that your VM will then use.

To separate your data management from your actual data, you can simply create a separate disk for your data. In the "Hardware" tab of your VM in the web GUI, select "Add" -> "Hard Disk" and, once again selecting your ZFS pool as the backend, create a second (probably larger) disk. If you decide to try another solution, simply reinstall the OS within your VM and don't touch the data disk; it will be available as an "external" drive after your VM is reinstalled, with all the data from the previous install still there.
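The same disk can also be added from the PVE shell (a sketch; the VMID 100, the pool name tank0 and the 100 GB size are assumptions):

Code:
# Add a second 100 GB data disk on SCSI bus 1, backed by the ZFS pool 'tank0'
qm set 100 --scsi1 tank0:100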
 
GREAT! :cool:
My main concern, in fact, is that it's always a pain in the b*** to migrate huge volumes of data.
I just tested your suggestion:
Created a separate disk in my ZFS pool (tank0/vm-100-disk...):
Code:
root@pve:~# zfs list
NAME                  USED  AVAIL     REFER  MOUNTPOINT
tank0                33.0G  77.4G       96K  /tank0
tank0/share            96K  77.4G       96K  /tank0/share
tank0/vm-100-disk-0  33.0G   110G     2.52M  -
Then when I go to my OMV, I see a new disk.
[screenshot: the new disk appearing in OMV's disk list]
Is this disk better formatted as ext4, I guess? Or should I format it as ZFS here? (I know that I can install ZFS support on OMV, but it will consume resources, and the next OS to use this data would also need ZFS support.)

You said also this:
If you decide to try another solution, simply reinstall the OS within your VM and don't touch the data disk; it will be available as an "external" drive after your VM is reinstalled, with all the data from the previous install still there.
That means a "downtime", and if I want to roll back I have to restore the whole VM, system and data.

Will it be possible to install, let's say, OMV v6 or Unraid in a new VM later and then, when everything is OK, "attach" the OMV v5 data disk to this new VM?

Sorry to bother you :rolleyes:
 
Is this disk better formatted as ext4, I guess? Or should I format it as ZFS here? (I know that I can install ZFS support on OMV, but it will consume resources, and the next OS to use this data would also need ZFS support.)
Nested ZFS is not recommended. If you want ZFS, use it on the host; in the guest, something light like ext4 is recommended.
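Inside the guest, preparing the new data disk with ext4 would look roughly like this (a sketch - the device name /dev/sdb and the mount point are assumptions; check with lsblk which device the new disk actually got):

Code:
lsblk                           # identify the new, empty disk first
mkfs.ext4 -L data /dev/sdb      # create the filesystem (assuming the disk is /dev/sdb)
mkdir -p /srv/data
mount /dev/sdb /srv/data
# make the mount permanent across reboots
echo 'LABEL=data /srv/data ext4 defaults 0 2' >> /etc/fstab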

That means a "downtime", and if I want to roll back I have to restore the whole VM, system and data.

Will it be possible to install, let's say, OMV v6 or Unraid in a new VM later and then, when everything is OK, "attach" the OMV v5 data disk to this new VM?
You can, with minimal downtime; however, a few manual steps are involved.

First you'd create a new VM without any "data" disks (just root) and install your new OS. Then, do the following:
  • On your old VM, "detach" the hard disk. Do not delete it, just detach!
  • Rename your disk from vm-XXX-disk-1 to vm-YYY-disk-1 (where XXX is your old VMID and YYY your new one; disk-1, since disk-0 should be the root of the old VM, while disk-1 is the separate data disk) - e.g. 'zfs rename <old> <new>'
  • Run 'qm rescan' from the command line
  • Your disk should now show up in the hardware tab of the new VM
  • Double click the disk and attach it via a bus of your choice (SCSI, SATA, VirtIO, etc...)
  • Start your new VM and configure the data disk
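The steps above can be sketched on the PVE shell as follows (a sketch; the VMIDs 100/101, the bus slot scsi1 and the pool name tank0 are assumptions, and the detach can just as well be done in the GUI):

Code:
# Detach the data disk from the old VM (it becomes an "unused" disk; do NOT remove it)
qm set 100 --delete scsi1
# Rename the underlying ZFS volume to match the new VMID
zfs rename tank0/vm-100-disk-1 tank0/vm-101-disk-1
# Let PVE pick up the renamed volume (it shows up under the new VM)
qm rescan
# Attach it to the new VM on a bus of your choice (SCSI here)
qm set 101 --scsi1 tank0:vm-101-disk-1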
 
  • Like
Reactions: sebcbien
Can't you bind mount the local pool into the VM?
I know I do it for my LXC containers.
I have all my data pools on the host.

Proxmox does not manage them, and I just bind mount the pool into my LXC.
I run Emby and JDownloader like that.
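For LXC containers, such a bind mount can be added with pct (a sketch; the CTID 101, the host path /tank0/share and the in-container mount point are assumptions):

Code:
# Bind mount a host directory into an LXC container as mount point mp0
pct set 101 -mp0 /tank0/share,mp=/mnt/share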
 
Can't you bind mount the local pool into the VM?
I know I do it for my LXC containers.
No, you can't bind mount into VMs. The VM is completely isolated from the host kernel, so the only options are emulating a disk device (which usually implies exclusive access) or mounting via the network, which is definitely possible, but unnecessary in this case IMHO.
 
or mounting via the network, which is definitely possible, but unnecessary in this case IMHO.
I've done it that way:
- Installed NFS on Proxmox
- Created a share on a ZFS dataset
- Mounted it with the Remote Mount plugin in OMV
- It was VERY FAST (saturated my Gigabit Ethernet)
- But I faced many problems:
  • Setting ACLs (on the Proxmox side): I was only able to do basic security with privileges. With ACLs, some things worked, but others didn't, and there were always error messages when applying. Not very clean...
  • NFS is very poor on security (you can only allow certain IPs to access the files; it's all or nothing)
  • I've read that NFS is limited in the number of shares.
  • Not easy: because of the "security limitations" of NFS, for each share I had to create an NFS share, then mount it in OMV, then share it in OMV, then create a Samba share...
So... I will give a try to the attach/detach of ZFS disks. :)
 
One other nice solution would be to have Samba or a "Samba manager" in an LXC.
This keeps Proxmox clean, the LXC can be backed up and cloned/moved, and the data would be stored directly on the Proxmox ZFS.
That LXC would share (via Samba) the data stored on the ZFS of Proxmox and bind mounted into the LXC.
I already tried that, but even with the Turnkey File Server it didn't go well - lots of strange behaviors.
If someone has already done it successfully that way, do not hesitate to share ;)
 
If someone has already done it successfully that way, do not hesitate to share ;)

Hmm, just use Debian, install Samba, configure it and you're good to go. Nothing special if you just want to share stuff. Plain and simple Samba - thousands of tutorials are available on the internet.
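A minimal sketch of that setup inside a plain Debian container, assuming the host dataset is already bind mounted at /mnt/share (the share name and the user "alice" are examples):

Code:
apt install samba
# append a share definition to /etc/samba/smb.conf:
#   [share]
#       path = /mnt/share
#       read only = no
#       valid users = alice
adduser alice
smbpasswd -a alice      # set the Samba password for the user
systemctl restart smbd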
 
@LnxBil Yes, on the host...
There is no easy way to regularly back up the host config, Samba shares, users, groups, ACLs etc...
So it's better to put Samba in an LXC... but it doesn't work correctly (in my tests).
 
@LnxBil Yes, on the host...
There is no easy way to regularly back up the host config, Samba shares, users, groups, ACLs etc...
So it's better to put Samba in an LXC... but it doesn't work correctly (in my tests).

I meant in an LX(C) container. It works out of the box. What is your exact problem (not with Turnkey - with a plain installation)?
 
Hi @LnxBil

I've not tried to use Samba from scratch, only through OMV (in an LXC) and the Turnkey fileserver https://www.turnkeylinux.org/fileserver.
I had various problems with managing ACLs, shares not showing on the network, errors popping up etc. It did not inspire confidence; it was not "clean".
To speak about my "base project": it is to end up with something like a "Synology", with as few hands in the code as possible. I have already done a lot, with LXCs, Docker and Portainer in an Ubuntu, OMV, Duplicacy, Pi-hole, HASS.io (VM), Jeedom... here is a screenshot.
All this with 16 GB (the maximum on an HP G8 MicroServer :rolleyes:) is great - thanks, LXC!
[screenshot: overview of the LXC/VM setup]
I will then write a post with my setup when finished.
I'm able to manage Samba through conf files, but you know, for a home server... I would prefer a neat browser-based GUI. I haven't found anything yet.
What I'm doing now is... installing Microsoft Hyper-V Server 2019 Core (which is free) and activating file sharing. It boots in seconds, uses only 550 MB of RAM (with ballooning) and runs very fast.
No GUI, only PowerShell (an amazing piece of software, that said), but I can manage users, shares etc. through the "Computer Management" interface of my Windows 10. I can "SSH" into PowerShell, and the Windows PowerShell ISE is very neat and convenient as well.
Up to now, it's the neatest solution I've found. :)
 
I will then write a post with my setup when finished.
Very interesting! Have you written the promised post? Thanks!
 
My setup is with the Windows Server 2019 Standard "free" desktop evaluation.
here are some links:
- https://gist.github.com/goffinet/36e6581670cde2d016f620f6f490d281
- https://serverfault.com/questions/9...yption-oracle-remediation-rdp-to-windows-10-p
- https://davejansen.com/recommended-settings-windows-10-2016-2018-2019-vm-proxmox/
- KMS license (trial without expiration): procedure: https://msguides.com/microsoft-software-products/windows-server.html
- download iso https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2019-essentials
- burn on usb with https://www.microsoft.com/en-us/download/details.aspx?id=56485

Some notes I made in my Evernote:

Start:
  • Install the Standard Desktop edition
  • Load the drivers during install:
    • Click on "Load driver" -> Browse -> expand the virtio ISO -> expand NetKVM\2k16\amd64 and press OK
    • You will then be returned to the same screen as before. Repeat the process above for the following paths:
      • Balloon\{windows edition}\amd64 - VirtIO memory balloon driver (optional, but recommended unless your server has plenty of RAM)
      • NetKVM - VirtIO Ethernet driver
      • vioscsi - the VirtIO SCSI driver (did not work for me with 2016 and 2019)
      • vioserial\{windows edition}\amd64
      • and lastly viostor\{windows edition}\amd64 - the VirtIO block storage driver
    • qxldod - QXL graphics driver (if installing Windows 7 or earlier, choose qxl instead)
After boot:
  • Allow the machine to be discoverable (W2K16)
  • sconfig.exe
  • Server Manager
    • Enable the role-based feature "File Sharing"
    • Tools -> Computer Management
    • Add user XXX
      • to the remote management admin group
  • Enable remote management
    • Go through the steps of "Win 2016 SRV Step 1.ps1" with PowerShell ISE, AS ADMINISTRATOR!
Code:
# enable psremoting if not in a domain
Enable-PSRemoting
# check adapter connection profile
Get-NetConnectionProfile
Set-NetConnectionProfile -InterfaceIndex 6 -NetworkCategory Private
# view current list of trusted hosts
Get-Item wsman:\localhost\client\trustedhosts
# add remote core server that you want to connect to
Set-Item wsman:\localhost\client\trustedhosts -Value XXXXX
# check hosts file (not normally needed)
Get-Content -Path "C:\Windows\System32\drivers\etc\hosts"
Add-Content -Path "C:\Windows\System32\drivers\etc\hosts" -Value "192.168.XXX.XXX XXXXX"
# add entire domain for delegation (handy if you have more than one host)
Get-WSManCredSSP
Enable-WSManCredSSP -Role Client -DelegateComputer "XXXXX"
# add credentials for each computer
cmdkey.exe /list
cmdkey.exe /add:XXXX /user:Administrator /pass:XXXXXXXX
# server side:
Get-WSManCredSSP
Enable-WSManCredSSP -Role Server
Enter-PSSession -ComputerName XXXX
 
