Great post! I didn't know these feeds existed until I saw this. I added them to my RSS reader category for software releases. It's good to have a single place to look for updates to the software I use.
That would be a great feature. I have to be careful when adding/removing PCI cards, as a NIC rename means I can no longer communicate with my headless servers. Add in that I pass the iGPU through to a VM for transcoding, and a rename puts me in quite a predicament.
Did you need to configure the integrated GPU as the default in the BIOS so that PVE only uses that one for the OS/server, with the discrete GPU then available for use by VMs?
I had the same issue with a privileged container.
Without nesting, I got this error when executing "docker run hello-world":
error mounting "proc" to rootfs at "/proc": mount proc:/proc (via /proc/self/fd/6)
With nesting enabled, it worked fine.
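If anyone prefers the shell to the GUI, the same change can be made with pct; the container ID 101 below is just an example, and for Docker in unprivileged containers keyctl=1 is sometimes needed as well:

    pct set 101 --features nesting=1
    pct stop 101 && pct start 101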
I'm surprised that this issue hasn't been solved yet. I'm planning on adding a GPU to one of my Proxmox servers, but I'm holding off because it is a headless server. If the interfaces on my PCI NIC get renamed and I lose network access, I won't be able to fix it remotely.
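One possible safeguard (a sketch only, I haven't tried it on PVE myself): pin the interface name to the card's MAC address with a systemd .link file so the name survives PCI re-enumeration. The file name, the interface name lan0, and the MAC below are placeholders:

    /etc/systemd/network/10-lan0.link:
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff
    [Link]
    Name=lan0

Then reference lan0 in /etc/network/interfaces (for example as the bridge port) and reboot.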
Did you ever figure this out? I needed Jenkins to access CIFS network shares, so I tried a privileged container. My entire Proxmox server would hang every time I started the LXC.
I switched it to a virtual machine and Jenkins runs fine there.
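For anyone else in the same spot, one workaround I've seen suggested (untested by me) is to keep the container unprivileged, mount the CIFS share on the Proxmox host, and bind-mount it into the container. The share path, credentials file, and container ID below are placeholders, and you may still need to sort out UID mapping:

    mount -t cifs //nas/share /mnt/jenkins-share -o credentials=/root/.smbcredentials
    pct set 120 -mp0 /mnt/jenkins-share,mp=/mnt/share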
This fixed mine as well. My container is privileged; when I unchecked "Unprivileged container" during creation, the "Nesting" option was greyed out so I could not check it.
After the container is created, though, the nesting checkbox is no longer greyed out and can be selected.
I'm not sure if that behavior is by design or...
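For reference, that checkbox just writes a line into the container config, so you can also add it by hand to /etc/pve/lxc/<CTID>.conf (CTID being your container's numeric ID) and restart the container:

    features: nesting=1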
The shared IOMMU group was the issue. I didn't notice that there were multiple devices in the same group until later. What was happening was that when the VM with the passed-through HBA card accessed it, it also took control of the Intel NIC in the same group, which is why the host lost access to it.
I added an...
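In case it saves someone else the trouble, this is roughly how I'd list the IOMMU groups to spot devices sharing a group (standard sysfs layout, nothing Proxmox-specific):

    for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU group ${g##*/}:"
      for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
      done
    done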
I installed an Intel X520 10Gb network card in my Proxmox server with the intention of making it a bridged interface for guests to share. My PVE is version 7.1-7.
However, the card doesn't appear in the "System -> Network" section of my PVE node. If I go to a VM's "Hardware -> Add PCI Device"...
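I'm not sure why it isn't showing up for you (the ixgbe driver should be in the stock PVE kernel), but in case it helps, this is roughly the /etc/network/interfaces stanza I'd use to bridge it, assuming the port shows up as enp3s0f0 in "ip link" (the interface and bridge names are placeholders):

    auto enp3s0f0
    iface enp3s0f0 inet manual

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports enp3s0f0
        bridge-stp off
        bridge-fd 0

Then apply with "ifreload -a" or from the GUI.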
This is so easy, it should be illegal.
Thanks for sharing this info. I have been running Docker inside VMs because I thought that was the only way to do it.