VMs can't access network resources outside of the PVE environment

Jun 2, 2024
In light of Broadcom's idiotic behaviour, I have decided to switch from VMware.

I'm attempting to evaluate the Proxmox products, but I don't have any spare bare metal right now, so I've installed PVE/PBS/PMG as VMs inside a VMware environment where I had plenty of spare CPU/RAM/disk to dedicate to this test. Everything installed and seemed to work fine, and the VMs were all accessible. In hindsight, though, if I move ahead with Proxmox as a bare-metal hypervisor solution, the PBS/PMG VMs on VMware don't make sense, so I re-created them as VMs inside PVE. Everything seemed to be fine and the VMs re-installed with no problem.

When I tried to access the VMs from a PC on the main LAN, I could not reach the PMG/PBS VMs like I could before, even though they have IPs on the same LAN subnet as the PC I'm using and the PVE VM. I thought this might be a weird quirk of the PVE server using the VMXNET3 adapter, so I recreated this whole environment using the Intel E1000 virtual adapter on the PVE server VM, with the same results.

I don't know the PBS or PMG products, or Linux in general, as well as I know Windows, so I created a Windows Server VM inside the PVE environment. After going through setup, the Windows install sets the NIC to DHCP by default. I logged in to Windows through the PVE VM console and tried to access the PBS and PMG VMs, and that worked fine. But the Windows server didn't have internet access and could not reach any network device outside of the PVE environment. When I checked ipconfig at the command line, it had received valid DHCP information from my DHCP server, which sits outside the PVE VM: its IP address made sense as the next available, and the gateway / DNS servers were also accurate. So... somehow the PVE server allowed the Windows VM to pull a DHCP address but not to fully communicate with that network afterwards? This makes no sense to me.

I found the PVE firewall setting, and also the firewall setting for each of the VMs inside PVE, and verified they were all off. The Windows VM firewall has also been turned off entirely.

It's like the PVE server is not allowing the VMs to communicate past PVE itself. I did as much googling as I could and found articles about setting /proc/sys/net/ipv4/ip_forward to 1. I rebooted everything; no luck.

I also added this line to the interfaces file:

post-up echo 1 > /proc/sys/net/ipv4/ip_forward

I rebooted everything again; still no luck.
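For what it's worth, on Debian-based systems like PVE the more conventional way to persist that setting is a sysctl drop-in file rather than a post-up line. A minimal sketch (the drop-in file name is my own choice):

```shell
# Persist the setting via a sysctl drop-in instead of a post-up line.
# Note: ip_forward only matters for routed setups; a plain bridged vmbr0
# switches frames at layer 2 and does not need it.
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
sysctl -p /etc/sysctl.d/99-ip-forward.conf   # apply without a reboot
sysctl net.ipv4.ip_forward                   # verify the current value
```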

I'm stumped so far. It's probably something small I'm missing, but at the same time, for such a basic setup and test, I'd assume that out of the box a PVE VM should be able to access the internet or other resources beyond the PVE server?

Thanks in advance.
 
Depends on your setup.

What does the output of ipconfig /all look like on Windows?

Can you also send me the output of the following commands on the PVE host?
Code:
cat /etc/network/interfaces
ip a
qm config <vmid>
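For comparison, a default bridged setup in /etc/network/interfaces on a PVE host usually looks roughly like the following. The addresses here are placeholders (documentation ranges), not values from this thread:

```shell
# /etc/network/interfaces -- typical single-bridge PVE layout (placeholder values)
auto lo
iface lo inet loopback

# physical NIC: no address of its own, it is enslaved to the bridge
iface ens224 inet manual

# the management/VM bridge; guests attach their virtual NICs here
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports ens224
        bridge-stp off
        bridge-fd 0
```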
 
Here are screenshots for the requested info. Thanks for the reply!
 

Attachments

  • Screenshot_20240603_071751_Microsoft Remote Desktop.jpg
  • Screenshot_20240603_071856_Microsoft Remote Desktop.jpg
  • Screenshot_20240603_071944_Microsoft Remote Desktop.jpg
  • Screenshot_20240603_072110_Microsoft Remote Desktop.jpg
  • Screenshot_20240603_072208_Microsoft Remote Desktop.jpg
Just making sure: is PVE now installed bare-metal, or is it nested within VMware with the current setup?

How are you testing connectivity? Can you ping the gateway from within the VMs? (ping 10.63.14.66)
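If it helps, here is a layer-by-layer way to check from inside a Linux guest. The gateway address is the one from this thread; the external targets are just examples I picked:

```shell
# Step-by-step connectivity check from inside a guest (targets are examples)
ip -br addr                      # does the guest have the expected IP?
ip route                         # is the default route via the gateway?
ping -c 3 10.63.14.66            # can we reach the gateway at all?
ip neigh show 10.63.14.66        # did ARP for the gateway resolve, or is it FAILED?
ping -c 3 1.1.1.1                # routing beyond the gateway
ping -c 3 proxmox.com            # DNS resolution on top of that
```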
 
This config is still inside the VMware environment. I don't have any spare bare metal for this evaluation right now.

So the VMs hosted by the PVE server (10.63.24.99) are 10.63.24.100, .101 and .102. Those three VMs cannot ping the main gateway, .66, but they can talk to each other, including to the PVE host on .99. From the LAN I can connect to PVE on .99, but not to the VMs hosted by it.

I'm still very baffled how the Windows server on .102 was able to pull a DHCP address and DNS settings from the DHCP server on .71, yet I can't ping it or communicate with it after the DHCP pull... Strange...

Considering this could be a VMware issue, I used some spare resources (again, not bare metal) on a high-end gaming system at home. It's a Windows 10 PC that I enabled Hyper-V on. I installed PVE as a Hyper-V VM on it and configured the Hyper-V networking as a public network so I can access the PVE server on the home LAN. I created similar VMs on PVE at home and got the exact same results: I can access the PVE VM from the LAN, but none of the VMs hosted by PVE. The PVE VMs can talk to PVE and to each other, but to nothing past PVE. The Windows VM didn't get a DHCP lease in the Hyper-V setup; I had to statically assign an address in order to talk to the other PVE VMs.

Perhaps testing PVE virtually (on VMware or Hyper-V) will not work?
 
That is indeed a bit weird, particularly since you are able to obtain a DHCP lease. But since the VMs can talk to each other via vmbr0, I suspect that something goes wrong when the traffic is sent outside.

You could run tcpdump on the bridge port (ens224) and check whether any packets successfully go outside when pinging the gateway. You can do this with the following command:

Code:
tcpdump -envi ens224 icmp

It would be interesting to see whether any packets show up there.
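Since DHCP (which is broadcast) works but unicast traffic does not, it may also be worth watching ARP on the same port. If ARP requests leave but no replies ever reach the VM, that points at the outer hypervisor's switch rather than at PVE:

```shell
# Watch ARP alongside ICMP on the bridge port while pinging the gateway from a VM
tcpdump -envi ens224 'arp or icmp'
```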
 
I just found the fix. Someone on the Proxmox Reddit group suggested a link to try. This is essentially "nested virtualization", and the VMware environment wasn't allowing the additional MAC addresses (of the VMs created within the PVE server) coming through the PVE VM itself.

Relaxing the vSwitch settings in VMware got it all working. See the screenshot.

I don't know Hyper-V at all, but I'm willing to bet the issue with my home PVE test setup not working is similar to the VMware one.
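For anyone hitting the same thing: on an ESXi standard vSwitch these security settings can also be relaxed from the host shell, and Hyper-V has a rough per-NIC equivalent. A sketch, with the vSwitch and VM names as placeholders:

```shell
# ESXi: allow the nested hypervisor's guest MACs through the vSwitch
# ("vSwitch0" is a placeholder for the actual vSwitch name)
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 \
    --allow-promiscuous=true \
    --allow-mac-change=true \
    --allow-forged-transmits=true

# Hyper-V (PowerShell): the rough equivalent is per-NIC MAC address spoofing
#   Set-VMNetworkAdapter -VMName "pve" -MacAddressSpoofing On
```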

Thanks for your help Stefan!
 

Attachments

  • Screenshot_20240604_043119_Chrome.jpg
Great that you found the fix; good to know that you need to explicitly enable this on VMware vSwitches.

Also, it might make sense to disable ip_forward again if you do not need it, since it makes your host act as a router, which could be a security issue depending on who else has access to your local network.
 
