Inconsistent behavior between Ubuntu VM networks

Jlourenco

New Member
Apr 17, 2020
Hello,

I'm having a weird network issue that I can't find a reason for, and I would greatly appreciate some help here.
I have a 2-node cluster connected through a physical switch; one NIC on each node is dedicated to internal communication.

Host01 interface:
iface enp1s0 inet manual

auto vmbr1
iface vmbr1 inet static
    address 10.10.0.2/8
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o enp35s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o enp35s0 -j MASQUERADE
    post-up iptables -t nat -A POSTROUTING -s '10.0.1.0/24' -o enp35s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.1.0/24' -o enp35s0 -j MASQUERADE
    post-up iptables -t nat -A POSTROUTING -s '10.0.2.0/24' -o enp35s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.2.0/24' -o enp35s0 -j MASQUERADE
    post-up iptables -t nat -A POSTROUTING -s '10.0.10.0/24' -o enp35s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.10.0/24' -o enp35s0 -j MASQUERADE

Host02 interface:
iface enp33s0 inet manual

auto vmbr1
iface vmbr1 inet static
    address 10.10.0.3/8
    bridge-ports enp33s0
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr0 -j MASQUERADE
    post-up iptables -t nat -A POSTROUTING -s '10.0.1.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.1.0/24' -o vmbr0 -j MASQUERADE
    post-up iptables -t nat -A POSTROUTING -s '10.0.2.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.2.0/24' -o vmbr0 -j MASQUERADE
    post-up iptables -t nat -A POSTROUTING -s '10.0.10.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.10.0/24' -o vmbr0 -j MASQUERADE

My goal with these interfaces was to create an internal network to attach to the VMs, so they get internal IPs and, through the NAT, can reach the internet without being reachable from it.
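As a quick editorial aside (not from the original post): which guest source addresses those four MASQUERADE rules actually cover can be sanity-checked offline with Python's standard ipaddress module, with the subnets copied from the config above. Note that the bridge itself is addressed as 10.10.0.2/8 while the NAT rules only match /24s, so a 10.x address outside those /24s would not be masqueraded:

```python
import ipaddress

# Source subnets the post-up MASQUERADE rules match (from the Host01 config above)
nat_subnets = [ipaddress.ip_network(s) for s in
               ("10.10.0.0/24", "10.0.1.0/24", "10.0.2.0/24", "10.0.10.0/24")]

def is_masqueraded(addr: str) -> bool:
    """Return True if a guest source address matches any of the NAT rules."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in nat_subnets)

print(is_masqueraded("10.0.2.3"))   # True  - covered by the 10.0.2.0/24 rule
print(is_masqueraded("10.10.5.9"))  # False - inside 10.0.0.0/8 but outside every /24 rule
```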

I then started testing with VMs running Ubuntu 21.04 desktop, to which I applied the following configs:
[screenshot: 1625650102961.png]
and
[screenshot: 1625650131513.png]

These two VMs are each on a different Proxmox node, and they can still ping each other. I then moved on to the same config, but with an Ubuntu Focal VM using cloud-init.
Here are my configurations:
[screenshot: 1625650242018.png]

which in the VM results in:
[screenshot: 1625650306471.png]

On the cloud-init VM, when I try to ping out I get back "Network is unreachable", and pinging this VM from other hosts doesn't work either.

Since my configs are the same on both the desktop and the cloud-init based VMs, shouldn't this be working? What could I be missing here?


Thanks.
 

Please provide the output of ip -details a from the cloud-init VM.
 
Is the gateway set correctly? ip r
 
[screenshot: 1625654502480.png]

I'm not sure how to interpret these, sorry, I'm not a network guy :D, but my gateway should be 10.10.0.2
 
Yes, this seems to be https://bugzilla.proxmox.com/show_bug.cgi?id=1838
Ubuntu with netplan handles this badly and requires a workaround that only works with network config v2, while we still use v1.
As a workaround, you can use a custom network snippet [0] with the network config v2 layout [1] and add 'onlink: true' to the interface config.


[0] https://pve.proxmox.com/pve-docs-6/pve-admin-guide.html#qm_cloud_init (10.8.3)
[1] https://cloudinit.readthedocs.io/en/latest/topics/network-config-format-v2.html
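To make the bug concrete (editorial illustration, not part of the original reply): further down in the thread the VM is given 10.0.2.3/27 with gateway 10.10.0.2, and that gateway lies outside the VM's /27. That is exactly the situation where the kernel rejects a plain default route ("Network is unreachable") unless the route is flagged as on-link. A minimal check with Python's ipaddress module:

```python
import ipaddress

# The VM interface used later in the thread: 10.0.2.3/27
iface = ipaddress.ip_interface("10.0.2.3/27")
gateway = ipaddress.ip_address("10.10.0.2")

# A plain (non on-link) gateway is only accepted if it is directly
# reachable, i.e. inside the interface's own subnet (10.0.2.0/27 here).
print(gateway in iface.network)  # False -> route rejected without on-link
```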
 
Uf... okay, you seem to know what you're talking about, which is good, but can you please put that in simpler words? :rolleyes:

I understand that this may be a network config version issue, but I didn't quite understand how to work around it.

Thanks
 
Hey, so I followed both of these with no success at all :/
Here is what I did...

snippets/network_133.yml

version: 2
ethernets:
  eth0:
    match:
      macaddress: '82:67:60:f9:44:35'
    dhcp4: true
    addresses:
      - 10.0.2.3/27
    gateway4: 10.10.0.2
    nameservers:
      addresses: [8.8.8.8]

Also tried with:

version: 2
ethernets:
  eth0:
    match:
      macaddress: '82:67:60:f9:44:35'
    dhcp4: true
    addresses:
      - 10.0.2.3/27
    gateway4: 10.10.0.2
    nameservers:
      addresses: [8.8.8.8]
    routes:
      - to: 0.0.0.0/0
        via: 10.10.0.2
        metric: 3

Then:
qm set 133 -cicustom network=local:snippets/network_133.yml

I restarted the VM. I know the settings were applied, because when I tried a different IP in the file I saw it change in the VM, but the VM is still not reaching the private network :/

Any other ideas?
Did I miss something here?
 
I noticed a new version of Proxmox was released yesterday; I upgraded to it, but the issue still seems to be present. Any clues on what could be wrong here?
 
I've re-read your post and noticed I had missed the "add 'onlink: true' to the interface config" part, but I don't understand how or where I need to add it. I tried adding it to the Proxmox network device, which removed the device from the VM, so I guess the option doesn't exist there.
I tried adding it to the custom cloud-init config, which resulted in the VM not being able to process the cloud-init.
I also tried adding it directly to the netplan inside the VM, which returned an error on reload saying the term doesn't exist.

I'm out of ideas on where to put that :/
 
Still couldn't find a fix...
For now, what I did was load up vmbr1 with an IP for each of the networks, to be used as gateways; on the VMs I then use the gateway specific to their network.

iface enp1s0 inet manual

auto vmbr1
iface vmbr1 inet static
    address 10.10.0.2/24
    address 10.0.1.2/24
    address 10.0.2.2/24
    address 10.0.10.2/24
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o enp35s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o enp35s0 -j MASQUERADE
    post-up iptables -t nat -A POSTROUTING -s '10.0.1.0/24' -o enp35s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.1.0/24' -o enp35s0 -j MASQUERADE
    post-up iptables -t nat -A POSTROUTING -s '10.0.2.0/24' -o enp35s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.2.0/24' -o enp35s0 -j MASQUERADE
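With the extra bridge addresses in place, a guest on the 10.0.2.0/27 network can point at the gateway that sits inside its own subnet, so no on-link trick is needed. A sketch of the corresponding guest netplan (the file path and interface name are example values, not from the thread):

```yaml
# /etc/netplan/50-cloud-init.yaml (example path) inside the guest
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 10.0.2.3/27
      gateway4: 10.0.2.2   # the per-network address added to vmbr1 above
      nameservers:
        addresses: [8.8.8.8]
```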
 
In your second config you can add the 'on-link: true' option to the route. See https://netplan.io/examples/#reaching-a-directly-connected-gateway
This works because network config v2 is basically the same as the netplan config and is passed through directly. The workaround is also only needed on Ubuntu; on Fedora/CentOS/RHEL it works out of the box.
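For reference, applying that to the second snippet posted earlier in the thread would look roughly like this (same MAC and addresses as posted; 'on-link: true' on the route is the substantive addition, and dhcp4 is dropped since a static address is set — treat this as a sketch, not a verified config):

```yaml
# snippets/network_133.yml
version: 2
ethernets:
  eth0:
    match:
      macaddress: '82:67:60:f9:44:35'
    addresses:
      - 10.0.2.3/27
    nameservers:
      addresses: [8.8.8.8]
    routes:
      - to: 0.0.0.0/0
        via: 10.10.0.2
        on-link: true
```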
 
