That would be a great feature. I have to be careful when adding or removing PCI cards, as otherwise I won't be able to communicate with my headless servers any longer. Add in that I pass the iGPU to a VM for transcoding, and a NIC rename puts me in quite a predicament.
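One way to guard against the rename before touching the hardware is to pin each interface name to its MAC address with a systemd .link file, so the name survives PCI slot reordering. A minimal sketch (the filename, the name "lan0", and the MAC are all placeholders for your own values):

```
# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
```

Then reference the pinned name in /etc/network/interfaces. On Debian-based hosts you may also need to run `update-initramfs -u` so the rename is applied early at boot.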
I had the same issue with a privileged container.
Without nesting, I got this error executing "docker run hello-world":
error mounting "proc" to rootfs at "/proc": mount proc:/proc (via /proc/self/fd/6)
With nesting enabled, it worked fine.
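For anyone hitting this on a headless box: nesting can also be toggled from the host shell instead of the GUI (the container ID 101 below is a placeholder for your own):

```shell
# Enable the nesting feature on an existing LXC container from the
# Proxmox host shell; restart the container afterwards for it to apply.
pct set 101 --features nesting=1
```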
I'm surprised that this issue hasn't been solved yet. I'm planning on adding a GPU to one of my Proxmox servers, but I'm holding off as it is a headless server. If the ports on my PCI NIC get renamed, disabling network access, I won't be able to fix it.
Did you ever figure this out? I needed Jenkins to access CIFS network shares, so I tried a privileged container. My entire Proxmox server would hang every time I started the LXC.
I switched it to a virtual machine and Jenkins runs fine there.
This fixed mine as well. My container is privileged; when I unchecked "unprivileged" during creation, the "nesting" checkbox was greyed out so I could not enable it.
After the container is created, though, the nesting checkbox is no longer greyed out and can be selected.
I'm not sure if that behavior is by design or...
Shared IOMMU groups were the issue. I didn't notice until later that there were multiple devices in the same group. When the VM with a PCI passthrough card accessed the HBA card, it also took control of the Intel NIC, which is why the host lost access to it.
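For anyone else checking before they pass a card through, a small sketch that lists every PCI device with its IOMMU group, so shared groups are easy to spot:

```shell
#!/bin/sh
# Print each PCI device alongside its IOMMU group number.
# /sys/kernel/iommu_groups is the standard sysfs location on Linux.
list_iommu_groups() {
    base=${1:-/sys/kernel/iommu_groups}
    for dev in "$base"/*/devices/*; do
        [ -e "$dev" ] || continue     # no IOMMU groups exposed at all
        group=${dev#"$base"/}         # strip the base path
        group=${group%%/*}            # keep only the group number
        printf 'IOMMU group %s: %s\n' "$group" "${dev##*/}"
    done
}

list_iommu_groups
```

If two devices print the same group number, they can only be passed through together.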
I added an...
I installed an Intel X520 10Gb network card into my Proxmox server with the intention of making it a bridged interface for hosts to share. My PVE is version 7.1-7.
However, it doesn't appear in the "system->network" section of my PVE node. Yet if I go to a VM's "hardware->add pci device"...
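If a card shows up for PCI passthrough but not under system->network, it's worth checking whether the kernel sees it as a network device at all: `lspci -k` will show whether the ports are bound to the normal driver (ixgbe for the X520) or to vfio-pci, and `ip link` will show whether the interfaces exist. Once the ports are visible, a bridge can be added by hand. A sketch of the usual /etc/network/interfaces stanza (the interface name enp3s0f0 and the address are placeholders):

```
auto enp3s0f0
iface enp3s0f0 inet manual

auto vmbr1
iface vmbr1 inet static
        address 192.168.10.2/24
        bridge-ports enp3s0f0
        bridge-stp off
        bridge-fd 0
```

On PVE 7 (ifupdown2) the change can then be applied with `ifreload -a`.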