If I do a vanilla Proxmox install on bare metal and create an LXC container (using the Debian 10.0 standard template), I can configure eth0 inside the container as either static or DHCP, and it bridges to my network without issue (I can ping it, SSH to it, etc.).
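For reference, this is roughly what the working container's network definition looks like on the bare-metal install (the CT ID 100, bridge vmbr0, MAC, and addresses are just placeholders from my setup):

    # /etc/pve/lxc/100.conf -- DHCP variant
    net0: name=eth0,bridge=vmbr0,firewall=0,hwaddr=AA:BB:CC:DD:EE:FF,ip=dhcp,type=veth

    # static variant
    net0: name=eth0,bridge=vmbr0,hwaddr=AA:BB:CC:DD:EE:FF,ip=192.168.1.50/24,gw=192.168.1.1,type=veth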
If I do a vanilla Proxmox install as a Hyper-V guest and create the same container with the exact same config, I cannot reach it over its network interface, either from the Proxmox guest or from my network, whether the address is static or DHCP (DHCP fails outright).
The Proxmox guest is connected to a Hyper-V virtual switch that is working properly, since I can reach the Proxmox web interface and SSH into it. So my guess is that either Proxmox configured itself differently because it detected it was running as a VM, Hyper-V is restricting the network interface somehow, or there is some extra step involved in bridging a second-layer guest through a first-layer guest out to the network.
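In case it is the Hyper-V side, one thing I plan to check is whether the virtual switch port for the Proxmox VM will accept traffic from the container's MAC address, since the LXC veth device gets its own MAC behind vmbr0 and Hyper-V drops frames from unknown source MACs by default. On the Hyper-V host that would be something like this (the VM name "Proxmox" is just what I called mine):

    # show the current setting for the Proxmox VM's virtual NIC
    Get-VMNetworkAdapter -VMName "Proxmox" | Select-Object VMName, MacAddressSpoofing

    # allow the nested container's MAC through the virtual switch port
    Set-VMNetworkAdapter -VMName "Proxmox" -MacAddressSpoofing On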
Has anyone managed to run Proxmox as a Hyper-V guest and get an LXC container's network interface to pass through to their network?