[7.1][OVH][NETPLAN] No network in LXC containers

chencho

Hi all.

I'm on OVH, fresh install of 6.4.

Then upgraded to 7.1.

I had to change /etc/network/interfaces to make it work.

Now I can install LXC containers.

I have disabled all firewalls (at the datacenter and on the container), but I can't use the network inside my containers.

On the Proxmox host, the network works fine.

Code:
root@shared1:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:00:00:XX:XX:XX brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 51.XXX.XXX.XXX/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ff:feb4:9e6f/64 scope link
       valid_lft forever preferred_lft forever

The template I'm using is ubuntu-20.04-standard_20.04-1_amd64.tar.gz, but it happens with centos7 too.

Code:
root@ns31:/mnt/# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-4-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-10
pve-kernel-5.13: 7.1-7
pve-kernel-5.4: 6.4-12
pve-kernel-5.13.19-4-pve: 5.13.19-8
pve-kernel-5.4.162-1-pve: 5.4.162-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-2
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-5
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-1
pve-xtermjs: 4.16.0-1
pve-zsync: 2.2.1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

Code:
root@ns31:/mnt# pct config 100
arch: amd64
cores: 12
features: nesting=1
hostname: domain.ltd
memory: 65536
net0: name=eth0,bridge=vmbr0,gw=135.XXX.XXX.254,hwaddr=02:00:00:XX:XX:XX,ip=51.XXX.XXX.XXX/32,type=veth
ostype: ubuntu
rootfs: data0:100/vm-100-disk-0.raw,size=500G
swap: 1024
unprivileged: 1

I have other servers with Proxmox 6 at OVH and they work as expected. I have my failover IP on the server, with a generated virtual MAC. I can't see any difference between the Proxmox 6 and Proxmox 7 container configs.
 
After reading the wiki:

You can avoid the problem by “routing” all traffic via a single interface. This makes sure that all network packets use the same MAC address.

A common scenario is that you have a public IP (assume 198.51.100.5 for this example), and an additional IP block for your VMs (203.0.113.16/28). We recommend the following setup for such situations:
Code:
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
I have set a virtual MAC in the OVH panel; not sure why it isn't working.

My current config:

Code:
auto lo
iface lo inet loopback

iface enp193s0f0 inet manual

iface enp133s0f0 inet manual

iface enp133s0f1 inet manual

iface enp193s0f1 inet manual

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 135.XXX.XXX.XXX
        netmask 255.255.255.0
        gateway 135.XXX.XXX.254
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

Reading the wiki, if I understand it correctly, I need to move my current vmbr0 address onto the eth0 config.

Then change vmbr0 to the VM address.

But... if I have 4 different VM IPs, in different ranges, do I need to add vmbr0, vmbr1, vmbr2... ???

E.g.:

Code:
auto lo
iface lo inet loopback

iface enp193s0f0 inet manual

iface enp133s0f0 inet manual

iface enp133s0f1 inet manual

iface enp193s0f1 inet manual

auto eth0
iface eth0 inet manual
        address 135.XXX.XXX.XXX/32
        gateway 135.XXX.XXX.254
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address  VM1-IP/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address  VM2-IP/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
...
 
Given your container and gateway are on different subnets and you're sure that's the correct config, ARP will have to be functioning correctly for that to work, so that's where I'd start looking in the first instance.
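For example (just a rough checklist, using the interface names and placeholder addresses from the configs above), something along these lines on the host and inside the CT would show whether ARP / proxy ARP is the problem:

Code:
# on the Proxmox host
cat /proc/sys/net/ipv4/ip_forward            # only relevant for the routed setup from the wiki
cat /proc/sys/net/ipv4/conf/eth0/proxy_arp   # ditto - should be 1 there, irrelevant for plain bridging
ip neigh show                                # has the OVH gateway been resolved?
tcpdump -eni vmbr0 arp                       # watch which MAC asks/answers while the CT pings

# inside the container
ip neigh show                                # is the gateway's MAC being learned at all?
ping -c3 135.XXX.XXX.254                     # the gateway from the host config above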
 
Here is my 7.1 config:

[Screenshots: Proxmox 7.1 network config]


And this is my 6 config:

[Screenshots: Proxmox 6 network config]

I don't see any difference!
 
From your /etc/network/interfaces:

post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

shouldn't that read 'eth0' now instead of 'eno1'?
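i.e. if you do go with that routed setup, presumably the pair of lines would look like:

Code:
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp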
 
I don't have this line in production. I'm asking whether I need to do this at all (and yes, it should be eth0 instead of eno1).

But then, do I need to add one vmbrN for each CT? :(
 
I have one public IP for Proxmox.

But I have n IPs (one for each CT) and I want to use them, as I do in Proxmox 6 :(
 
I can't answer why it worked in Proxmox 6 but fails in Proxmox 7 - in theory it should behave the same.

I notice you refer to a /28 address in your example config - is that what your provider has told you to use as the CIDR? If so, that's a 14-host subnet, so I would be very surprised if they hadn't given you a gateway for that subnet. Can you please confirm?
 
Hi bobmc.

No, that's not my config; it comes from the Proxmox docs as an example.

I have eth0 configured as vmbr0 in Proxmox.

In the CT I have another IP, with its own virtual MAC.

In Proxmox 4, 5 and 6 this config works as expected. In Proxmox 7 it doesn't :(
 
Something strange:

Code:
root@shared1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
     
19: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:00:00:91:39:7e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 51.XXX.XXX.XXX/32 scope global eth0  ---->>> OK
       valid_lft forever preferred_lft forever
    inet 145.XXX.XXX.XXX/32 scope global eth0 ------>>> ¿?¿?¿?¿?¿?
       valid_lft forever preferred_lft forever
    inet6 fe80::ff:fe91:397e/64 scope link
       valid_lft forever preferred_lft forever

The 51 IP is correct.

The 145 one isn't.

My CT is Ubuntu 20, but I have nothing inside /etc/netplan! Where is the network config stored? Maybe if I can remove the 145 IP it will work.

EDIT: the 145 IP was a previous IP I used for testing; after a reboot I only have 51, but it still doesn't work...
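(For reference, a leftover address can also be dropped at runtime without rebooting - plain iproute2, nothing Proxmox-specific; the 145 address below is just the placeholder from the output above:)

Code:
# inside the container
ip -4 addr show dev eth0                 # list what is currently assigned
ip addr del 145.XXX.XXX.XXX/32 dev eth0  # remove the stale address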

Code:
2: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:00:00:91:39:7e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 51.XXX.XXX.XXX/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ff:fe91:397e/64 scope link
       valid_lft forever preferred_lft forever
 
To be honest, I would recommend you install a VM on your host (e.g. pfSense or OPNsense) to handle external routing - much easier to manage than editing config files and iptables rules, IMO.

Send me a PM if you like, and I'll try and help you get set up with this.
 
I need to make it work with Proxmox, with each VM keeping its own IPs.

My Ubuntu 20.04 netplan config is empty.

So:

nano /etc/netplan/01-netcfg.yaml

Code:
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
      addresses: [XXX.XXX.XXX.XXX/32]   # (IP of the VM)
      gateway4: 135.XXX.XXX.254         # (IP from the dedicated server)
      nameservers:
        addresses: [208.67.222.222,208.67.220.220]
      routes:
        - to: 135.XXX.XXX.254           # (IP from the dedicated server)
          via: 0.0.0.0
          scope: link

netplan try && netplan apply work fine.

But I still don't have network.
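A rough way to see where the packets stop (placeholders as in the configs above; plain iproute2/tcpdump, nothing Proxmox-specific) is to ping the gateway from inside the CT while watching the bridge on the host:

Code:
# inside the container
ip route                          # the default route should point at 135.XXX.XXX.254
ping -c3 135.XXX.XXX.254          # can the CT reach the gateway at all?

# on the Proxmox host, in a second shell
tcpdump -eni vmbr0 icmp or arp    # do requests leave with the virtual MAC, and does anything come back?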
 
Using OLA (which I believe might be similar for Scale... I wish I had a client willing to pay for a Proxmox cluster with Scale servers), I use Open vSwitch with an LACP bond to get a bridge set up.

That said, I'm only connecting the OLA/Open vSwitch bridges to the vRack. On the internet/public side you will have some serious fun with that, which is the reason I only use the vRack side of things, as there I can cluster my Proxmox servers.

To make the (failover etc.) IPs on the public side work on the VMs, you either need an intermediate device/hop or need to assign the MAC generated by OVH to that IP... I'll be seriously surprised if that doesn't work on the Scale's public side too, but the first thing to sort out is the LACP bond; then you can add it to a bridge to have the VMs connect out.

Something you need to understand with Proxmox 7: the systemd/udev machinery changed how MAC addresses are allocated to bridges somewhere along the way, and if you had a bridge on the Scale's public network working on Proxmox 4/5/6 but not on Proxmox 7, you will need to hardcode/force the MACs on the bridge. That is something I expect to be fun on Scale with OVH, given how they "route" traffic by MAC (or rather bind the MAC in the ARP tables and don't use ARP packets to get the IP-MAC binding).
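For illustration only - a minimal sketch with made-up placeholder values (assuming ifupdown2's hwaddress option, as shipped with PVE 7); the MAC would have to be the one OVH already knows for that server, not the example below:

Code:
auto vmbr0
iface vmbr0 inet static
        address 135.XXX.XXX.XXX/24
        gateway 135.XXX.XXX.254
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0
        # pin the bridge MAC instead of letting systemd/udev generate one
        hwaddress aa:bb:cc:dd:ee:ff   # placeholder - use the physical NIC's real MAC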
 
Hi, I have the same problem. I bought a dedicated server and 3 additional IPs and bridges, but my LXC containers don't have an internet connection. Any fix? Using Proxmox Virtual Environment 7.1-7.
 
Best advice:
Set up a firewall, holding all the IPs, and then have the firewall route to the LXCs on their own VLANs.

Other than that, you will have to create the OVH virtual MAC for those IPs and set the LXCs' interfaces to that MAC, on the same bridge as the outside IP/interface.
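For the second option, roughly like this (a sketch reusing the placeholder values from the pct config earlier in the thread; the hwaddr must be the virtual MAC generated for that failover IP in the OVH panel):

Code:
# attach the CT's eth0 to vmbr0 with the OVH virtual MAC and the failover IP
pct set 100 -net0 name=eth0,bridge=vmbr0,hwaddr=02:00:00:XX:XX:XX,ip=51.XXX.XXX.XXX/32,gw=135.XXX.XXX.254,type=veth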
 
