Ok, Another Proxmox / Open Media Vault Post...

chefvoyardee

New Member
Mar 30, 2020
So... I've hosed and reformatted my home server setup about six times since I started my home lockdown courtesy of the Dos Equis virus two weeks ago. Frankly, it's driving my wife nutso. I upgraded the rig to a 2.9 GHz Intel i5 with 32 GB DDR4, roughly 8 TB of spinning disk, and a 512 GB SSD, spread across six drives. I started with bare metal OMV and formatted it twice due to configuration screw-ups. Then I moved to Proxmox bare metal for better VM management, since Cockpit was busted for me in OMV 5. I've re-formatted Proxmox three times, both for testing and because of bad Linux root commands and configs. I feel like my brain has been plugged into the Matrix at the rate I'm researching stuff, but I digress.
I have settled on Proxmox as the bare metal host for certain. I am becoming familiar with LXC containers, as they have very low overhead. My questions surround disk and file management. I am wrestling with a few questions at this point:
  1. Is there truly an effective way to let OMV manage non-OS (media and file storage) physical disks without HD pass-through? I am under the impression that only one system (Proxmox or OMV) can manage a disk at any given moment, and I have three disks to potentially pass through. I also thought that Proxmox backup and monitoring activities are hampered when doing pass-through. Can you pass through multiple disks to one VM/LXC?
  2. Is there still a lot of value in using OMV if disk management is off the table? Does OMV really do it any better than Proxmox at this point?
  3. ZFS vs. EXT4. I keep thinking KISS (keep it simple, stupid). ZFS has CPU overhead and significant RAM overhead. I think I'm good on RAM, but I keep reading about nightmares around data restoration and recovery. Then again, I hear all about the performance and data integrity potential. But on 8 TB, is it worth it? I plan on backing up to an external drive: probably snapshots for just the OS, and file-level backups for the storage drives. Is ZFS really worth it? Are there implications if I decide to move forward with OMV? I will be file sharing between Linux and Windows systems, by the way.
  4. Is there an equivalent to OMV (GUI preferably) that works more natively with Proxmox, at least with regard to file/share/backup management? I have the Turnkey Fileserver LXC installed. It's ugly and not terribly intuitive, but with a bit of dedication and pain it looks like it gets the job done for file share and permission management via Samba. Any opinions?
  5. Docker, LXC, or VM for a Plex instance? All three exist at this juncture.
If you haven't noticed, I'm having a hard time cutting out OMV altogether simply because I have a real liking for it. But part of me feels that my Proxmox structure could be somewhat compromised by a less-than-ideal interface between these two systems. And part of me feels that I'm taking the easy way out by leaning on OMV rather than designing "proper" VMs and containers solely in Proxmox to accomplish a balanced environment.
Then again, I almost hosed one of my Proxmox deployments simply because I wanted to change the node name from "server" to "Server1". I'll never get those four hours back. But boy did I learn a lot! Sorry for the rambling, and thank you in advance for sharing your viewpoints. Stay safe!!!
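(For anyone hitting the same wall: on a standalone, non-clustered node the rename boils down to roughly the steps below. This is only a sketch of what I pieced together, not an official procedure; the old/new names are placeholders, and the guest configs have to be moved by hand because they live in a per-node directory.)

    # change the hostname in the usual Debian places
    hostnamectl set-hostname Server1
    nano /etc/hosts                  # replace the old name here too, then reboot
    # after the reboot the guest configs are still under the old node directory,
    # so move them across:
    mv /etc/pve/nodes/server/qemu-server/*.conf /etc/pve/nodes/Server1/qemu-server/
    mv /etc/pve/nodes/server/lxc/*.conf /etc/pve/nodes/Server1/lxc/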
 
Hi,

1.) Disk management is very simple in OMV, and as long as you do not use ZFS inside OMV, there is no need to pass the disks through. Just for completeness: a vDisk in QEMU does have some overhead.

2.) OMV is Debian with the stock kernel. Proxmox VE is Debian with a newer Ubuntu-based kernel and self-maintained ZFS modules.

3.) ZFS has been used by enterprises for more than 10 years, and the technology is widely trusted.
You can read horror stories like that about every filesystem on the market.
As a rule of thumb, ZFS wants about 4 GB of RAM plus roughly 1 GB per TB of used storage.

4.) I would recommend installing OMV inside a Debian container. As the underlying filesystem I would recommend ZFS.
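For what it's worth, with 8 TB of storage that rule of thumb works out to roughly 4 GB + 8 GB = 12 GB for the ARC, which fits comfortably in 32 GB of RAM. If you want to cap how much RAM the ARC may take (so VMs and containers keep their share), the usual way on a Proxmox host is a module option; the 8 GiB limit below is only an example value:

    # cap the ZFS ARC at 8 GiB (8 * 1024^3 bytes); pick a value that suits your workload
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    update-initramfs -u -k all    # rebuild the initramfs so the option applies at boot
    # or apply it immediately without a reboot:
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max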
 
Dear @wolfgang, regarding point 4): when you say ZFS for OMV, do you mean using ZFS as the Proxmox container storage for OMV?

At point 1) you mentioned not to use ZFS with OMV unless he's planning a passthrough.

Thanks
 
4) when you say ZFS for OMV, do you mean using ZFS as the Proxmox container storage for OMV?
Use ZFS on the PVE host and install an OMV CT on top of it.

At point 1) you mentioned not to use ZFS with OMV unless he's planning a passthrough.
That was specific to his case and his hardware setup.
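In practice that could look roughly like the sketch below: create a dataset on the host's pool and bind-mount it into the OMV container. The pool name "rpool", the dataset and mount paths, and CT ID 101 are placeholders for whatever your setup actually uses; whether OMV's web UI then treats that path as a manageable filesystem is a separate question that comes up further down the thread.

    # on the PVE host: create a dataset for the media files
    zfs create -o mountpoint=/tank/media rpool/media
    # bind-mount it into the OMV container with ID 101
    pct set 101 -mp0 /tank/media,mp=/srv/media
    # inside CT 101, /srv/media can then be shared out via SMB/NFS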
 
Use ZFS on the PVE host and install an OMV CT on top of it.
Hello Wolfgang,
and this is exactly my problem: OMV then expects the drives under /dev (sda, etc.).
How can I fix this? Can I emulate this somehow? I am using the latest version, 6.3.
Greetings, Matthias
 
OK, there are several different ways of doing this

1. create a VM with multiple virtual drives and virtualise the entire OMV installation
+ easy to do
- reliant on proxmox host
2. pseudo-pass-through the physical drives to the VM as virtual disks (Proxmox handles the disk I/O, but the data is written to the physical drive; see the sketch after this list)
+ data is on the drive like it would be with a bare metal install
- performance penalty
- no SMART data inside the VM
3. pass-through the hardware disk controller (the VM can talk directly to the drives)
+ almost the same as a bare metal install
- hardware dependent and can be difficult to set up
- hardware is exclusive to that VM and can't be shared
- potential for data loss if the pass-through fails?
4. forget about OMV, install Samba on the Proxmox host, and maybe use something like Webmin to manage your file shares
+ best performance
+ scope for sharing resources and data with other VMs and Containers
+ data is on the drive
- takes a bit more work to set up
- no nice GUI
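To illustrate option 2: handing a whole physical disk to a VM is one command per disk on the host, and you can repeat it for as many disks as you like, which also answers the "multiple disks to one VM" question from the first post. VM ID 100, the SCSI slot, and the disk ID below are placeholders:

    # find the stable by-id name of the drive you want to hand over
    ls -l /dev/disk/by-id/
    # attach it to VM 100 as an extra SCSI disk (use scsi2, scsi3, ... for more drives)
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL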
 
Another possibility...
As OMV can be installed on Debian, perhaps you could install OMV directly on the Proxmox host?
It might work? I don't know if anyone has tried that.
 
Another possibility...
As OMV can be installed on Debian, perhaps you could install OMV directly on the Proxmox host?
It might work? I don't know if anyone has tried that.
@bobmc
No, I want to leave the Proxmox host untouched. Additional software only makes the recovery process more difficult.
4. forget about OMV, install Samba on the Proxmox host, and maybe use something like Webmin to manage your file shares
My Proxmox host is only accessible via the management VLAN, so I don't want Samba on the host. Currently I run plain Samba in an LXC container, without Webmin, and I don't like it: it's CLI-only, even though the configuration itself is simple. I only make configuration changes from time to time, so it is hard for me to find my way back into Samba's CLI config each time. An intuitive GUI in an LXC would therefore be nice! And AD integration would be a nice-to-have...
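For the CLI-only route, the share side at least stays short enough to keep in a notes file. A minimal sketch, assuming a share called "media" on /srv/media and an existing Unix user "matthias" inside the container (names and paths are examples only); AD integration is a separate topic (winbind/SSSD territory) and not covered by this:

    # /etc/samba/smb.conf inside the LXC - append a share section like this
    [media]
       path = /srv/media
       read only = no
       browseable = yes
       valid users = matthias

    # give the Unix user a Samba password and restart the daemon
    smbpasswd -a matthias
    systemctl restart smbd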
 