SDN - Trying to make VMs on different hosts talk to each other

dpearceFL

I have a three-node Proxmox cluster, all running 8.2.7. I would like to create two Linux VMs on different nodes in the same cluster and have them be able to talk to each other. I assume using the SDN functionality is the proper way to do this.

  1. I have created a zone with an ID of sdn100
  2. I have created a VNet using sdn100
  3. I then created a subnet off of that VNet using 10.2.0.0/24 and a gateway of 10.2.0.1 and a DHCP range of 10.2.0.10 to 10.2.0.250
The two VMs request and are granted IP addresses within that range.
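For reference, I believe the resulting config under /etc/pve/sdn/ looks roughly like this (vnet1 is just a stand-in for whatever the VNet is actually called):

```
# /etc/pve/sdn/zones.cfg  (Simple zone with automatic DHCP -- sketch)
simple: sdn100
        dhcp dnsmasq
        ipam pve

# /etc/pve/sdn/vnets.cfg
vnet: vnet1
        zone sdn100

# /etc/pve/sdn/subnets.cfg
subnet: sdn100-10.2.0.0-24
        vnet vnet1
        gateway 10.2.0.1
        dhcp-range start-address=10.2.0.10,end-address=10.2.0.250
```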

When I log into the two VMs, neither can ping the other.

What am I missing? Thanks!
 
You are right, I was using a "Simple" zone.

I deleted the entries in Proxmox associated with my simple zone.

I have tried a "VLAN" zone, but DHCP is no longer working.

Now what?
 
Correct, as far as I know DHCP currently only works for Simple zones; support for the other zone types should arrive in the future.

What is your goal? To test SDN, or to get VMs running on different nodes talking to each other?

If your goal is just to have VMs communicate with each other, then you don't need SDN: configure your Proxmox nodes to bridge onto your home subnets, and all VMs will be able to talk to one another (as long as routing and the firewall allow it).
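For example, the usual node setup is just a Linux bridge over the LAN-facing NIC in /etc/network/interfaces; a minimal sketch, assuming eno1 is that NIC and 192.168.1.0/24 is the home subnet (names and addresses are examples):

```
# /etc/network/interfaces (sketch)
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

Attach every VM's vNIC to vmbr0 on each node and they all land on the same L2 segment, regardless of which node they run on.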

But it depends on your router. I use pfSense as my main router, and on it I can create VLANs with DHCP enabled.

You could also install pfSense as a virtual machine, configure it to provide VLANs/DHCP, and point your VMs at it.
Many options are available.
 
What is your goal? To test SDN, or to get VMs running on different nodes talking to each other?
A bit of both. I'm exploring what SDN is good for.

But it depends on your router.
I didn't mention it before, but I am doing this in a corporate environment. I have three Proxmox 8.2 nodes, each with two NICs. One NIC goes to our general network; the other NIC on all three nodes is plugged into a standalone switch, and that second network carries all of the VM traffic. I was hoping to replace that switch with SDN. I guess I will have to stand up something like dnsmasq in a VM.
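If I do end up running dnsmasq in a VM on that second network, I'd expect a minimal config along these lines (the interface name and lease time are my assumptions):

```
# /etc/dnsmasq.d/vm-net.conf (sketch)
interface=eth0
bind-interfaces
dhcp-range=10.2.0.10,10.2.0.250,12h
dhcp-option=option:router,10.2.0.1
```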

Given this information, which SDN option should I use?

Thanks.
 
I'm now on 8.3.2 and SDN still seems like a technology preview. Any news on improvements?
 
We use SDN in production for our customer-related zones/VLANs, so I wouldn't call it a tech preview, but it's certainly one of the newer features in Proxmox.

Basically, anything that a PVE host itself needs access to (i.e. will have an IP on), along with potentially other VMs, goes on a plain Linux bridge; we set up a dedicated bridge for each VLAN as opposed to making one bridge VLAN-aware.
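A sketch of one of those dedicated per-VLAN bridges in /etc/network/interfaces (the NIC name, VLAN ID, and address are examples):

```
# dedicated bridge for VLAN 20; the host has an IP on it
auto vmbr20
iface vmbr20 inet static
        address 10.20.0.5/24
        bridge-ports eno1.20
        bridge-stp off
        bridge-fd 0
```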

If the network is for VMs only, then we use an SDN zone of the VLAN type and assign it a VLAN-aware Linux bridge. Some of the zones we have are Customer, DMZ, Lab, etc. Inside each of those zones we set up the VNets with their VLAN config; then in each VM we simply select the appropriate VNet.
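Roughly, the pieces fit together like this (zone/VNet names and the tag are examples, and I'm abbreviating the config files from memory):

```
# /etc/network/interfaces - the VLAN-aware bridge the zone points at
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# /etc/pve/sdn/zones.cfg
vlan: lab
        bridge vmbr1

# /etc/pve/sdn/vnets.cfg
vnet: labnet
        zone lab
        tag 30
```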

This is helpful in a number of ways, including:

1) Since you can be more descriptive when creating zones/VNets, it's easier to identify at a glance which VNet you want, as opposed to picking a specific vmbr or typing in a VLAN ID. Not a big deal when dealing with a handful of VMs, but when you start managing a lot of VMs for different clients/teams it makes provisioning/automation a lot easier.

2) SDN objects can have permissions applied. This way we can limit what certain user/API accounts have access to and keep certain VNets (e.g. DMZ) from being used; there's a sketch of this after the list.

3) Being able to create SDN objects at the datacenter level makes it much easier to roll out a new zone/VNet across a cluster of PVE hosts and keep things organized.

There are other benefits I'm sure I'm missing, but those are the top-of-mind reasons we implemented SDN both internally and for our customers.
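As an example of point 2, an ACL on a zone can look something like this (the zone, group, and path are made up; double-check the exact /sdn ACL path and role names on your version with the pveum man page):

```
# grant a group use of the 'lab' zone's VNets, nothing else
pveum acl modify /sdn/zones/lab --groups devteam --roles PVESDNUser
```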
 
If the OP wants cross-node VM communication on a separate private network range, then SDN -> Zones -> VXLAN will work, but there is (currently, apparently) no DHCP + IPAM + DNS facility available for that zone type.
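A VXLAN zone is essentially just a peer list of the node IPs plus a reduced MTU (to leave room for the ~50-byte VXLAN header); a sketch with example addresses:

```
# /etc/pve/sdn/zones.cfg (sketch)
vxlan: vxzone
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        mtu 1450
```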

I just set up a Simple network and have DHCP and DNS working via dnsmasq, PVE IPAM, and PowerDNS, so that when I create a VM/CT with DHCP networking, the host and its A record get inserted into the pdns SQLite database successfully. However, there seems to be a bug where the reverse PTR record wants to go to the wrong in-addr.arpa zone, so I am still working on that... but it ALMOST works.
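For anyone trying to reproduce this, the moving parts look roughly like the following; the PowerDNS API URL/key and the zone names are placeholders for my actual values:

```
# /etc/pve/sdn/dns.cfg (sketch)
powerdns: pdns
        url http://127.0.0.1:8081/api/v1/servers/localhost
        key <pdns-api-key>
        ttl 3600

# /etc/pve/sdn/zones.cfg (sketch)
simple: simzone
        dhcp dnsmasq
        ipam pve
        dns pdns
        dnszone lab.example
        reversedns pdns
```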

Now, what is super weird is that I can allocate VM/CTs on other nodes and it still works! I thought a Simple network with DHCP would only work on a single host node. As it turns out, my earlier VXLAN test happened to use the same private network range, so maybe, just maybe, it is possible to have DHCP allocation of VM/CTs and have it work across host nodes. Except for my PTR glitch, I am doing exactly that right now, most likely as a happy accident of forgetting I had used the same range for the VXLAN test before adding the Simple network on top of it.