PVE nested in Hyper-V: PVE gets network access but PVE VMs do not (destination host unreachable). Looking for guidance on setting promiscuous mode in Hyper-V.

elzorroazul777
Dec 7, 2022
It looks like the topic of port monitoring is a bit above my current level of understanding, and I haven't been able to find a concrete guide to help set this up. I have Hyper-V enabled on my Windows host, and Proxmox is installed as a VM in Hyper-V. Proxmox was able to get a DHCP-assigned IP address from the physical router on my local network, but the VMs within Proxmox are unable to connect to anything (destination host unreachable).

Any guidance would be greatly appreciated!
 
Thank you so much for your question!! I spent half a day beating my head against the monitor trying to solve this. Then I saw the word 'promiscuous' and it immediately clicked. Here is what fixed everything for me:
[screenshot of the relevant Hyper-V VM network adapter setting]
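Since the screenshot doesn't render here: the setting that matters for nested Proxmox guests is MAC address spoofing on the VM's network adapter (which is what the later replies in this thread refer to), and it can also be set from PowerShell. A minimal sketch, assuming the Proxmox VM is named my_pve_vm_name as in the trunk command below:

Code:
# Let the Proxmox VM's virtual NIC send and receive frames with MAC addresses
# other than its own, so the guests nested inside Proxmox can reach the LAN.
Set-VMNetworkAdapter -VMName my_pve_vm_name -MacAddressSpoofing On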
UPD: BTW, you also might want to change the adapter to trunk mode:

Code:
Set-VMNetworkAdapterVlan -VMName my_pve_vm_name -Trunk -AllowedVlanIdList 10-100 -NativeVlanId 0
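One small addition from me (not in the original post): you can read the adapter's VLAN configuration back to confirm the trunk settings were applied:

Code:
Get-VMNetworkAdapterVlan -VMName my_pve_vm_name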
 
Just a note - if your Hyper-V host is set up with a trunk port (so that you can set the VLAN at the VM level), then so far I have not found a way to make this work. With MAC spoofing my VM gets a DHCP IP just fine, but it still cannot ping anything outside of the VM network on the Hyper-V host. If you set the Hyper-V (Proxmox) VM adapter to Trunk using PowerShell as above, then you are trunking over a trunk and Hyper-V loses its mind.

I am still fighting this battle, but I think the next step is to add a traditional access port to the Hyper-V physical host and enable MAC spoofing; that should allow the guests nested inside the (already nested) Proxmox VM to communicate.
 
Adding an additional physical NIC to the Hyper-V host in access mode (not in trunk mode, as the other NIC is) instantly resolved the problem. I now have my proof-of-concept Ceph cluster ready to test.
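For anyone following along, here is a rough PowerShell sketch of that second-NIC setup. The switch name, physical adapter name, virtual adapter name, and VM name are placeholders of mine, not values from this thread:

Code:
# Create a second external vSwitch bound to the spare physical NIC,
# without sharing it with the management OS.
New-VMSwitch -Name "PVE-Access" -NetAdapterName "Ethernet 2" -AllowManagementOS $false

# Give the Proxmox VM an extra virtual NIC on that switch; new adapters
# are untagged (plain access behaviour), so no trunk configuration here.
Add-VMNetworkAdapter -VMName my_pve_vm_name -SwitchName "PVE-Access" -Name "PVE-Access-NIC"

# Enable MAC spoofing on that adapter so frames from the nested Proxmox guests pass through.
Set-VMNetworkAdapter -VMName my_pve_vm_name -Name "PVE-Access-NIC" -MacAddressSpoofing On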