Sharing Necessary Hardware with LXC and VM

lampbearer
Dec 10, 2017
30-year computer guy here -- I've spent weeks playing and reading the fine manual, and I still have virtualization questions for people a whole lot smarter than I am:

Short version - I need help sharing hardware (details below): ZFS to containers, optical drives to any guest OS (VM or LXC), tracking down hardware errors displayed on server boot, enabling drive health monitoring and admin notification, a process to virtualize Windows 10, and the pros and cons of Docker -- I know, just a few newbie lightweight questions (ha-ha).

Trying to build a multi-use home server using Proxmox instead of the Windows 10 box that was doing RAID, file serving, content creation, etc. I've seen others doing the same (servethehome is one site), but NO ONE has a full writeup of the whole thing, nor do they deal with some of the thorniest issues. Mine are all hardware-passthrough related:

1. Proper way to share a ZFS RAID array defined in Proxmox itself with containers?
I settled on using mountpoints (mp0, etc.) in the LXC definition -- is this the best way? (I read dozens of pages with a half-dozen different approaches.) I want my file server that runs Samba to run in a container so there is some isolation from the host -- using the TKL fileserver -- but should I be using some NAS image, or Rockstor? I had issues with OMV and FreeNAS because of their hardware assumptions.
2. How to automount and share a CD/DVD/BluRay drive -- I've got this working on the host (kind of -- I had to put some entries in fstab that I think are right, but I'm not sure of the parameters to use). I have 2 drives and they basically mount when I insert disks (x-systemd.automount comes into play). But now I want them shared with containers -- an Ubuntu container that I want to use as a workstation. Share them as a block device? Use nbd? Samba-share them? I tried some things in the LXC definition (block device and mp), but the container wouldn't start if there was no disk in the drive, so that's out. The solution has to support insertion/removal while everything is up and running.
3. How to resolve hardware errors -- I was getting some strange stuff: pata_jmicron had 2 devices on it, but only one mounted. I couldn't figure that out and had to move the drive to a different interface -- it worked fine in Windows and the BIOS. I'm still getting lots of random ATA timeout errors that I can't trace down. It's just too involved -- what I read says "Linux hardware drivers are just supposed to work", but that probably isn't true on a system where the MOBO alone has 3 different SCSI controllers and I've got another plugged in as a PCI card -- it's a Gigabyte MOBO with 8 SATA connectors onboard (6 to Intel, 2 to JMicron) plus the PATA bus. Some basic step-by-steps for me to read up on would be appreciated -- I've spent weeks scrounging without finding a process to pin these down, and I need RELIABLE, not FLAKY. I'm pretty sure I still have quirks waiting to bite me later.
4. How to enable auto-notification of some sort when ZFS and/or drive SMART becomes an issue -- the whole idea is for things to be bulletproof, but you have to get emailed or notified somehow when things start getting corrupt, not AFTER it all fails. I'm having trouble figuring this part out. Example: booting Proxmox off mirrored flash drives was serious bad mojo -- one or the other was always going offline and showing a corrupt mirror array, so now I am booting off a single HD that at some point I need to mirror for redundancy. I'm dreading it and need something solid with easy failover -- it would be nice to clone Proxmox itself to a flash drive as a failsafe alongside mirrored hard drives.
5. Trying to run Windows 10 in a VM and have it be able to access advanced stuff like NVidia CUDA, sound cards, the aforementioned ZFS RAID array, USB hot-insertions, DVD/BD -- you know, pretty much what hardware DOES. Part of this is being able to virtualize an existing install such that Windows re-detects the hardware and does NOT have to be re-registered when moving from the old hardware to the new virtual machine. I've seen lots on VirtIO but can't quite pin down a process, plus about 10 other partial writeups -- nothing really definitive. Again, any help, even a general direction rather than specifics, is really appreciated.
6. Lastly -- to Docker or not to Docker? I've seen the pros and cons -- and yes, I understand the basic differences from containers, but it is just so darned convenient compared to having to create your own, given the limited selection of containers currently released. Do you bugger up the security of the main Proxmox install, fight to get Docker into a VM or an LXC, or just give up altogether because the performance might be horrible due to all the layers and passthroughs? Thoughts?
 
2. How to automount and share a CD/DVD/BluRay drive?
I'd like to bring this thread back to life instead of starting a new one. Even though I don't think this is EXACTLY what I'm looking for, it's pretty close.

Are we able to pass DVD drives connected via USB to the host through to VMs and/or LXCs?

I have two DVD drives that I would like to connect to my host, then create two MakeMKV containers and pass one drive through to each. I currently use a NUC with one drive connected, but being able to do two at once (and skip some of the file transfers in the process) would really expedite backing up my collection.

Everything I see either mentions PCI/PCIe connections or is about HDD/storage mounts.

Any input is greatly appreciated!
 
I'm not sure if anyone saw this back in 2017 - and I hope the OP has found a solution.


1. Proper way to share a ZFS RAID array defined in Proxmox itself with containers?
-> It depends on the 'level of security/control' you want - there are 2 main ways:

1. Let Proxmox manage the ZFS and mount the array directly into the LXC (sketched below) - probably the simplest (it can be done via the GUI), but it is a single point of security (any breach of the LXC's protection = full access to the whole array etc.)

2. Create a VM NAS and share folders via NFS or SMB - more tedious to set up, and you have to make an early decision (see below) - the benefit is that you can create a specific login/password for each VM/LXC and then control access via ACLs (in the NAS VM)

The dilemma is who should manage the ZFS - you can either let the VM OS do it by passing the disks through directly (which is probably faster), provided the NAS OS supports ZFS (TrueNAS yes, OMV only via a plugin, etc.) - or let Proxmox manage the ZFS and attach a virtual disk to the VM OS (more configuration/complexity, but easier to back up?)
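
For option 1, a minimal sketch - the pool path /tank/media, the mount path and the container ID 101 are placeholders, adjust to your setup:
Code:
# on the Proxmox host: bind-mount a host path (e.g. a ZFS dataset) into container 101
pct set 101 -mp0 /tank/media,mp=/mnt/media

# equivalently, add this line to /etc/pve/lxc/101.conf
mp0: /tank/media,mp=/mnt/media

With an unprivileged container you may also need to map UIDs/GIDs (or chown the dataset) so the Samba user inside the container can actually write to it.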


2. How to automount and share a CD/DVD/BluRay drive?
-> passing hardware to LXCs = pain (vs. a VM, where it can be done in the GUI)

You need to edit the LXC config file, which is located at (on the host): /etc/pve/lxc/<LXCID>.conf

The new method of device passthrough is by using device mapping (dev):
Code:
dev0: /dev/<yourdevice>,gid=<GID>,uid=<UID>

e.g. to pass through, say, a TV tuner, you would add the following lines:
Code:
dev0: /dev/dvb/adapter0/demux0,gid=44,uid=0
dev1: /dev/dvb/adapter0/dvr0,gid=44,uid=0
dev2: /dev/dvb/adapter0/frontend0,gid=44,uid=0
dev3: /dev/dvb/adapter0/net0,gid=44,uid=0
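
For the optical drives in the original question, a minimal sketch along the same lines - assuming the drive shows up as /dev/sr0 on the host and that 24 is the cdrom group's GID (check with getent group cdrom):
Code:
# /etc/pve/lxc/<LXCID>.conf - hypothetical optical-drive passthrough
dev0: /dev/sr0,gid=24,uid=0
# ripping tools like MakeMKV may also want the matching SCSI-generic node;
# the sgX number varies per system - 'lsscsi -g' on the host shows the pairing
dev1: /dev/sg2,gid=24,uid=0

A USB-attached drive appears as /dev/srX on the host in just the same way, which should cover the USB question above.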

- special thanks to leesteken for discovering this (and for actually reading the PVE documentation!)

The older method was to use lxc.cgroup2.devices.allow entries, idmap and/or chown - but that method doesn't survive a Proxmox reboot (= pain)


Restart the LXC and you should now have access to your passthrough hardware


Note that this style of LXC 'passthrough' requires you to get the device working on the host first (i.e. set up all the firmware/libraries/dependencies etc. on Proxmox) - then pass /dev/<yourdevice> through to the LXC

Unlike a full VM, you can't just hand over a raw USB bus (/dev/bus/usb etc.) and let the guest's drivers deal with it - the LXC shares the host kernel, so the host has to have a working driver

-> remember that this method of passing host devices to LXCs risks 'contaminating' your Proxmox install with extra files
i.e. firmware binaries (often from non-FOSS sources), extra libraries (graphical stuff like Mesa) or anything you download using curl/wget :p, which may 'break' when you upgrade Proxmox/Debian/the kernel at a later date

whereas using VMs keeps all the 'dirty' files inside the guest only (separate from the Proxmox host files)


-> It is probably easier to just give the CD drive to a VM NAS (which has the right/easier tools for this) and then share it out via NFS/SMB (minimal Samba sketch below)
-> basically, passing hardware through to an LXC = pain
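
If you do go the VM NAS route, a minimal smb.conf stanza for the mounted disc might look like this - the share name, mount path and user are placeholders, adjust to wherever your NAS OS automounts the disc:
Code:
# /etc/samba/smb.conf inside the VM NAS
[optical]
   path = /media/cdrom
   read only = yes
   guest ok = no
   valid users = mediauser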


3. How to resolve hardware errors?
-> can't help here as 'I don't have the same hardware as you'

SATA / SCSI / USB drives normally 'just work' in Linux - everything labelled /dev/sdX basically means the device speaks SCSI commands (much like how 'USB mass storage device' means you don't need to install additional drivers)

What probably isn't working is your SCSI card - Linux is picky about the exact controller chip on the card (rather than the brand on the box/sticker) - this happens a lot with wireless devices (WiFi/BT) too; normally what's missing is a firmware binary that needs to go into /lib/firmware (if it isn't FOSS, it doesn't get included in Debian by default) - sometimes it's a wild goose chase across Google/Debian packages/private repos if the manufacturer doesn't provide a Linux driver file
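
A quick way to see which controller chips you actually have, which kernel driver claimed them, and whether the kernel is complaining about firmware or ATA timeouts (standard Debian tools, nothing Proxmox-specific - the grep patterns are just a starting point):
Code:
# list storage controllers with vendor:device IDs and the kernel driver in use
lspci -nnk | grep -iA3 'sata\|ide\|raid\|scsi'

# check the kernel log for missing-firmware and ATA timeout/error messages
dmesg | grep -iE 'firmware|ata[0-9]+.*(timeout|error)'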

Unfortunately some cards just don't work because the mfg only cares about Windows users and doesn't want to release drivers/firmware - there are brands that you 'just avoid' for WiFi cards (not naming names here - but if you know you know)

NVMe generally works OOB if the drive goes directly on the motherboard

The old IDE subsystem was eventually dropped in favour of libata (that's when HDDs changed from /dev/hdX to /dev/sdX - PATA controllers are now handled by pata_* drivers, like your pata_jmicron) - SATA has been around since the early 2000s and has been the default ever since (and still is for non-OS bulk storage)

CD/DVD support is generally good if the drive is SATA, for the same reasons as above (I don't know if that holds for IDE) - but most people just write ISOs to USB sticks now rather than stick with optical media


4. How to enable auto-notification of some sort when ZFS and/or drive SMART becomes an issue?
-> VM NAS OSes normally have this stuff built in
-> there is probably a way to do it on Proxmox itself, but you'd have to wire it up yourself with extra packages (Proxmox isn't a NAS) - a sketch of that DIY route is below
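
A possible DIY route on the Proxmox host - assuming outbound mail already works (e.g. Postfix as a relay) and with the address below as a placeholder:
Code:
# SMART monitoring with email alerts
apt install smartmontools
# in /etc/smartd.conf: scan all drives, enable offline data collection,
# run a short self-test weekly and mail on any trouble
#   DEVICESCAN -a -o on -S on -s (S/../../7/02) -m admin@example.com
systemctl restart smartmontools

# ZFS event notifications via ZED - in /etc/zfs/zed.d/zed.rc:
#   ZED_EMAIL_ADDR="admin@example.com"
#   ZED_NOTIFY_VERBOSE=1
systemctl restart zfs-zed

Debian's zfsutils-linux also ships a monthly scrub cron job, so with ZED mail configured you should hear about checksum errors well before the pool actually fails.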


5. Trying to run Windows 10 in a VM and have it be able to access NVidia CUDA, sound cards, the ZFS RAID array, USB hot-insertions, DVD/BD?
-> the only way I know of at the moment to give a Windows VM that kind of hardware access is PCI(e) passthrough with the IOMMU - there are lots of guides on how to do that with GPUs, written by people who virtualise gaming setups (rough outline below)

-> the catch is that you essentially 'lock' that piece of hardware to that specific VM while it's running (you can reassign the GPU etc. to another VM while it's offline)

-> for Linux VMs there is also the option of VirGL - but that is all a bit experimental at the moment (and doesn't work for games)
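
As a rough outline of what those passthrough guides walk through (Intel shown - AMD boards use their own IOMMU, and the PCI address and VM ID below are placeholders):
Code:
# /etc/default/grub - enable the IOMMU, then run update-grub and reboot
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules - load the VFIO modules at boot
#   vfio
#   vfio_iommu_type1
#   vfio_pci

# find the GPU's PCI address, then attach it to VM 100 (q35 machine type recommended)
lspci -nn | grep -i nvidia
qm set 100 -hostpci0 01:00,pcie=1,x-vga=1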


6. Lastly -- to Docker or not to Docker?
-> the age-old debate of VM vs. LXC vs. Docker - basically, do you value security (VM) or ease of scaling (Docker)?

VMs provide virtualisation (and isolation) at the 'hardware' level - most secure, but more resource-intensive/less efficient utilisation (CPU cores/RAM/storage are pre-allocated and generally can only be changed while the VM is offline); they allow direct hardware passthrough (a few clicks in the GUI), and guest and host OS need to be updated separately (for security fixes)

LXCs provide virtualisation at the kernel level - all your LXCs share the host's resources, so utilisation is better (usage scales up/down as the processes require, rather than relying on your estimate of how much you need); they allow some form of passthrough (with the pain described above), and 'guest' and host still need separate updates

Docker provides virtualisation at the container-engine level - it is generally more suitable for things that need to be switched on/off or scaled quickly (processes generally communicate via network ports/mounted file systems); updates depend on the Docker repo system, which is great for pulling the 'latest' images of popular apps, but good luck getting hardware passthrough :)

I can't help you decide what's best, as only you can determine the right balance of scalability vs. security, but generally:
- if an app does one thing, accesses most things via network ports, doesn't need complex hardware and needs to scale - Docker
- if you need to do multiple things, or need access to other stuff/complex hardware passthrough - then either a VM or an LXC

If you're really worried about security, you can claw some of it back by running the Docker engine inside a VM (at a performance cost)
 