No access to nested ESXi 7.0.3 host after upgrade to Proxmox 8.1 - connection between nested hosts works!

Oh drat. I tried Proxmox 8.2.2 [Linux 6.8.4-2-pve (2024-04-10T17:36Z)] on VMware Workstation as a test - guest OS type set to VMware ESXi 8 or above - and *everything* worked perfectly. I then set up a new install of PVE on my VMware ESXi 8 U2 host (still just testing, but better kit to work with) and, again, all worked fine except as above: I couldn't make it see the network.

The PVE host gets an IP from DHCP (so it is seeing other hosts and the DHCP server) and the subnet/gateway are all correct, but the Win2022 VM (which installed fine, all with the VirtIO drivers - the drivers installed inside Windows and there are no issues with devices in Control Panel) says "no internet access" and "unidentified network". It can't ping its own gateway. I can't change the VMXNET3 adapter, as that's the only one available for this VM when the guest OS type is set to ESXi 8 or above, so I'm a bit stopped. Interestingly, I was testing XenServer 8 and it had exactly the same issue, so I'm guessing it's VMXNET3 that's the problem here. Bit stuck at the mo - but still preferring Proxmox over XenServer as it seems much more configurable.

OK, got it to work (just) - though I'm not sure what the side effects are:

1) Important: In VMware, under Networking | Virtual Switches | Edit vSwitch0 | Security, set Promiscuous Mode to Enabled (this can also be done from the CLI - see the esxcli sketch after this list)
2) Change the Guest OS type in VMware for the server from Other/VMware to Linux/Debian 10
3) Change the disk controller to LSI Logic SAS
4) Within the ESXi host, leave the network adapter as VMXNET3 (the drop-down lets you select E1000, but it won't save)
5) Make sure VLAN is off on vmbr0

Above only needs doing once.
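
For step 1, the same security policy can be set from the ESXi shell with esxcli. A minimal sketch, assuming the default standard switch name vSwitch0 (nested hypervisors usually also want forged transmits and MAC changes allowed, so those are included here too):

    # ESXi shell: let the nested hypervisor's extra MACs through vSwitch0
    esxcli network vswitch standard policy security set \
        --vswitch-name=vSwitch0 \
        --allow-promiscuous=true \
        --allow-forged-transmits=true \
        --allow-mac-change=true

    # Verify the policy took effect
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch0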

6) Power on the host; the network will probably be disabled.
7) Change the VM's Ethernet device to E1000E and save, then set it back to VirtIO (paravirtualized) and set the MTU to 1 (in Proxmox, a value of 1 means inherit the bridge MTU, i.e. same as the host - see the qm sketch after this list)
8) Go back to the network settings and disable/enable the adapter. It should then come up with proper network connectivity - you might have to do this a few times.
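
Steps 6 and 7 can also be done from the PVE host shell with qm. A sketch, assuming VM ID 100 and a single NIC on vmbr0 - the MAC address is a placeholder, keep the one your VM already has (check with qm config 100):

    # Temporarily switch the NIC model to E1000E
    qm set 100 --net0 e1000e=BC:24:11:00:00:01,bridge=vmbr0

    # Switch back to VirtIO; mtu=1 means inherit the bridge MTU
    qm set 100 --net0 virtio=BC:24:11:00:00:01,bridge=vmbr0,mtu=1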

However, you might find you have to do an ipconfig /release && ipconfig /renew, or change the NIC back and forth with a disable/enable in between.
It also won't survive a reboot - you have to do steps 6-8 again.
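
Inside the Windows guest, that adapter toggle plus release/renew can go in a small batch file so the post-reboot dance is one step. A sketch, assuming the adapter is named "Ethernet" (list yours with netsh interface show interface); run it from an elevated prompt:

    :: Bounce the NIC, then refresh the DHCP lease
    netsh interface set interface name="Ethernet" admin=disabled
    netsh interface set interface name="Ethernet" admin=enabled
    ipconfig /release
    ipconfig /renew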


Definitely flaky! But happy I could get a 10Gbps adapter as opposed to 1Gbps. Also, nested virtualisation isn't supported in production anyway, but it's good enough for testing!
 
Just to add to this - I did a conversion from ESXi today and it worked perfectly except for the NIC. But after doing the above, on each reboot I now only need step 8 (disable/enable the adapter) and it works perfectly. Just wait until it's got an IP from DHCP, or set a static one. At least it's the same on every reboot. I was also very impressed with the ESXi conversion - it worked very well. I've been using ESXi for years and am hugely disappointed with the Broadcom stuff going on, but this looks like a very decent replacement!! Performance-wise I don't see any major issues either, especially when backed by 15k disks on a Dell PERC in RAID 6 and 10GbE networking. Well done Proxmox!!
 
Thanks Mike,
I've been pulling my hair out for the last 2 f@%#& days!
 
Has anyone found a solution to this yet? I have ESXi 7.0 U3 and 8.0 nested VMs that boot but still don't have network access after all of this - the vmxnet3 adapters are picked up but don't get any network connectivity. Running Proxmox VE 8.2.4.
 
The solution is still the same - it got a bit better in 8.2.4, but I still find it an issue. Don't forget to set promiscuous mode as per step 1 of the instructions. Then you need to toggle the adapter on/off on both the local host and the server (network connected/disconnected - see the sketch below). It does "latch" - much quicker in 8.2.4 than before. Definitely no good for production, but you wouldn't run it nested like this in production anyway.
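
The connected/disconnected toggle on the PVE side can be scripted too, via the NIC's link_down flag. A sketch, again assuming VM ID 100 and a placeholder MAC - keep whatever model and MAC the VM already has (vmxnet3= instead of virtio= if that's your setup):

    # Pull the virtual cable, then plug it back in (PVE host shell)
    qm set 100 --net0 virtio=BC:24:11:00:00:01,bridge=vmbr0,link_down=1
    qm set 100 --net0 virtio=BC:24:11:00:00:01,bridge=vmbr0,link_down=0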
 
How are you getting to the VMware web interface if the nested ESXi host VM itself is inaccessible from the Proxmox network?

Edit: To clarify, in my situation it is the ESXi hosts themselves that are not getting network connectivity while running as nested VMs in Proxmox. It sounds like the steps you have are for getting network connectivity to VMs running under ESXi, i.e. Proxmox PVE > ESXi > VMs.
 
Oh I see - the other way around! Sorry, I see what you mean, but I haven't tried it that way around - sorry.
 
I'm amazed this is still an issue. I just fully reloaded my Proxmox host (an ASRock EPYC board running a 7551P with 128GB of RAM) with Proxmox 8.2.4 and kernel 6.8.12-1. I also reloaded my nested ESXi VMs with 8.0.2. The issue is still the same. I can ping and access the ESXi hosts from an outside PC. I have stood up a vCenter on one of the ESXi hosts, and Veeam on a VM in Proxmox. Veeam can talk to vCenter but cannot talk to the hosts directly. VMXNET3 just refuses to work. I've swapped down to E1000E, which I can now ping from Veeam, but there's zero communication beyond that. Hopefully someone will find the fix soon.
 
Anyone have a solution for this? I am trying two nested ESXis, but I cannot ping them from Proxmox itself or from any other VM on the same bridge. No VLANs. Using E1000E is a bit better, but still very unstable. I've already enabled promiscuous mode on all the NICs. Ironically, accessing the web interfaces works from other PCs.
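
One way to narrow this down is to watch the bridge on the PVE host while you ping the nested ESXi. A sketch, assuming the bridge is vmbr0 - substitute the MAC of the nested ESXi VM's NIC (from qm config <vmid>):

    # PVE host shell: watch ARP to/from the nested ESXi's MAC on the bridge
    tcpdump -eni vmbr0 arp and ether host bc:24:11:00:00:02

If ARP requests go out but the replies never appear on the bridge, the frames are being filtered on the way back in, which points at the same promiscuous/MAC-filtering behaviour the steps earlier in the thread work around.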
 