The setup is quite simple: everything runs on a single node. I'm currently using SDN with one simple zone and one vNet.
In this small network, there is one firewall/dhcp (pfSense) and several guest VMs.
The goal is for the DHCP server to assign IP addresses to the guest VMs so that they can ping each other within the same subnet.
To avoid conflicts with pfSense's DHCP server, the vNet has no subnet, gateway, or IP range configured, and the zone has its 'automatic DHCP' disabled.
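For reference, a minimal configuration matching this description might look like the following in the SDN config files (the zone and vNet IDs here are placeholders, not my actual names):

```
# /etc/pve/sdn/zones.cfg -- simple zone, no automatic DHCP enabled
simple: testzone

# /etc/pve/sdn/vnets.cfg -- vNet in that zone, with no subnet defined
vnet: testnet0
        zone testzone
```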
Sounds simple, doesn't it? But it has given me a lot of headaches because of its unpredictable behaviour.
The VMs can't always get IPs. Some obtain a lease successfully but then fail to get one after a restart (not always, but sometimes).
I've also tried removing the existing network device from a failing VM and adding a new one; some of them then get an IP.
Even manually setting static IP addresses for the VMs doesn't work.
For example, VM1 at 192.168.0.1 and VM2 at 192.168.0.2, with the same subnet mask and gateway, still can't ping each other.
This problem never happens if I connect all of these VMs and pfSense to a Linux/OVS bridge on the physical interface.
My verdict, therefore, is that the SDN simple zone is unstable, with random errors that make it very difficult to trace what's going wrong under the hood.
I know many people use SDN with VLAN settings across multiple nodes in their production system without issues.
Not many people use SDN to set up isolated networks for testing purposes; fewer users means less impact, so I don't expect there will be a quick fix.
That's why I'd rather look for alternative methods.
Bridging to the physical network (Linux/OVS) is not an option for me, as I don't want to mix the production and testing environments.
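One alternative I'm considering is a plain Linux bridge with no physical ports attached, which would keep the test network fully isolated from production while avoiding SDN entirely. A sketch of what that might look like in /etc/network/interfaces (the bridge name vmbr9 is just an example):

```
# Isolated bridge: no physical ports, so traffic never leaves the host
auto vmbr9
iface vmbr9 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```

pfSense and the test VMs would all attach their virtual NICs to vmbr9, with pfSense still acting as the DHCP server for that segment.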