Networking anomalies migrating from VMware to Proxmox VE

jeversole

Aug 26, 2024
Hello PVE Community,

First post from a Proxmox newbie. I'm in the tedious process of migrating from a VMware environment to a Proxmox VE environment, which will include hardware and storage upgrades. I'm hoping you kind souls can help me troubleshoot.

What I'm encountering is that once VMs are moved and their IP addresses are corrected to reclaim the fixed addresses they had in the old environment, they don't quite behave the way they should, and there are communication issues with some services that make calls to these VMs.

For instance, I have 2 domain controllers that serve Active Directory and DNS. Once moved to PVE, I corrected the ethernet adapter settings to match the settings in the old environment verbatim. I can ping the IP addresses from any machine on the network, yet anything that points to those IPs as its DNS servers fails to communicate with them. In another instance, I have a pfSense VM set up for our VPN service. I corrected all network settings to match the previous environment, but the public IP shows offline. I also have Nagios Netmonitor set up, but even though the VMs are on the domain and can be pinged, Netmonitor doesn't see them. Same with Stratodesk NoTouch Center, which links thin clients to a server farm: once moved from VMware to PVE, it doesn't see them, even though the configuration is the same and the IP addresses are live. Can anyone suggest what might be throwing things off?
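From the reading I've done so far, one common culprit when fixed IPs are reclaimed on new virtual NICs is that the replacement NICs come up with new MAC addresses, so upstream gear and clients can hang on to stale ARP entries or MAC-based client tracking (Meraki gear tracks clients by MAC by default, as far as I can tell), and Windows DCs may need to re-register their DNS records. Here is roughly what I plan to try next; the VM ID, MAC, bridge, and VLAN tag below are placeholders, not my exact values:

Code:
# On the PVE host: make the migrated VM reuse the MAC it had under ESXi
# (VM ID 101, the MAC, and tag 10 are placeholders -- substitute your own)
qm set 101 --net0 virtio=00:50:56:AB:CD:EF,bridge=vmbr0,tag=10

# Inside a Windows guest such as a domain controller: flush caches and
# re-register its DNS records
ipconfig /flushdns
ipconfig /registerdns
arp -d *

# On a Linux client that still can't reach the VM: clear its neighbor cache
ip neigh flush all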

Here is a breakdown of my old and new environments:

Old (VMware)
-Running vSphere, with each host in the cluster running ESXi 6.7
-Runs on 3 Dell PowerEdge R720s with Intel Xeon E5-2660 (0) processors (8 cores, 16 threads x2), 256 GB RAM each
-Uses NFS storage on two servers (one SSD array, one SAS HDD array)

New (Proxmox VE)
-Running a PVE cluster with 3 nodes
-Runs on 3 Dell PowerEdge R7525s with AMD EPYC 7542 processors (32 cores, 64 threads x2), 512 GB RAM each
-Uses the same NFS storage during migration (see the import sketch below). Once migration is complete, the 3 servers from the old cluster will become a Ceph storage cluster
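For context, the per-VM import flow I've been using over the shared NFS store looks roughly like this (the VM ID, paths, and storage names below are examples, not my exact values):

Code:
# Create an empty target VM, then import the ESXi disk from the shared NFS
# store and attach it (IDs, paths, and storage names are examples)
qm create 120 --name dc01 --memory 8192 --cores 4 --net0 virtio,bridge=vmbr0,tag=10
qm importdisk 120 /mnt/pve/nfs-ssd/dc01/dc01.vmdk local-lvm
qm set 120 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-120-disk-0 --boot order=scsi0

For Windows guests, the usual advice seems to be to attach the imported disk as SATA or IDE until the VirtIO drivers are installed, then switch to VirtIO SCSI.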

Network:
-Spectrum Managed Network Edge
-Cisco Meraki MX85 Router and switches
-LAN, DMZ, and VoIP networks are VLANs in the Meraki router (my matching PVE bridge config is sketched below)
-SAN runs outside that environment on a 10G Netgear switch
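To mirror those VLANs on the PVE side, each node uses a VLAN-aware bridge. Here's a minimal sketch of my /etc/network/interfaces setup, assuming a single trunk uplink named eno1 and VLAN 10 as the LAN/management network; my real NIC names, VLAN IDs, and addresses differ:

Code:
# /etc/network/interfaces (sketch) -- trunk uplink into a VLAN-aware bridge
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# Management IP for the node itself on the LAN VLAN (addresses are examples)
auto vmbr0.10
iface vmbr0.10 inet static
        address 10.0.10.5/24
        gateway 10.0.10.1

My understanding is that the Meraki port facing each node must be a trunk allowing the same VLAN IDs, which is one of the things I've double-checked.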

In PVE, I seem to have everything correct with bridges and VLANs. In the screenshots below, the pic on the left shows the Meraki VLANs and the one on the right shows PVE. Any machine I've moved over that serves a utility purpose, such as PDQ Deploy, works just fine and can talk to physical machines on the network or to VMs in either the PVE or VMware environment. The issues seem to come with anything that is monitoring from outside (DMZ or public). I have one web server that used to work with 2 NICs in the VM, one for the LAN and one for the DMZ. To get that one to work, I had to ditch the DMZ adapter and do a 1:1 NAT through the Meraki firewall; the DMZ-NIC setup I expected to work is sketched below. Any help or ideas would be greatly appreciated. Thanks in advance.
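For the record, what I expected to work for that web server was simply a second virtual NIC tagged for the DMZ VLAN, verified by watching for the tagged frames on the host's uplink, roughly like this (the VM ID, tag, interface name, and IP are placeholders):

Code:
# Add a second NIC on the DMZ VLAN (VM ID and tag are placeholders)
qm set 105 --net1 virtio,bridge=vmbr0,tag=30

# On the PVE host, watch the uplink for tagged DMZ traffic to/from the VM
tcpdump -nei eno1 vlan 30 and host 172.16.30.10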

[Attached screenshots: Meraki VLAN configuration (left), PVE network configuration (right), and Networking.png]
