Only one interface works on container

Discussion in 'Proxmox VE: Networking and Firewall' started by tripflex, Mar 13, 2017.

  1. tripflex

    tripflex New Member

    Jan 18, 2013
    Hoping someone can help me, as I've been pulling out the only hair I have left (which is not much) trying to figure this out.

    I'm unable to assign multiple IPs to any CentOS 7 container, only the first adapter/IP works correctly. This only happens on RHEL (CentOS 7 specifically) containers. I do not have this issue with any Debian containers, which is what puzzles me so much about this.

    The only way I have been able to get the RHEL/CentOS containers to work with multiple IP addresses is by using IP aliasing and manually adding the interface inside the container (as eth0:0, for example).

    I also have to manually add the route in the host for it to work:
    ip route add XXX.XXX.XXX.XXX dev vmbr0
    Why don't the additional network interfaces work (in bridge mode)? Why do I have to manually add the route on the host to get it to work correctly?
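    For anyone wanting to reproduce the workaround, it looks roughly like this (the addresses below are placeholders, not my real values):

    ```shell
    # Inside the CentOS 7 container: persistent alias for the second IP,
    # created by hand since the Proxmox-generated config doesn't work.
    # /etc/sysconfig/network-scripts/ifcfg-eth0:0
    #   DEVICE=eth0:0
    #   ONBOOT=yes
    #   IPADDR=XXX.XXX.XXX.XXX
    #   NETMASK=255.255.255.255

    # On the Proxmox host: add a host route for the failover IP via the bridge
    ip route add XXX.XXX.XXX.XXX dev vmbr0
    ```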

    proxmox-ve: 4.4-84 (running kernel: 4.4.44-1-pve)
    pve-manager: 4.4-12 (running version: 4.4-12/e71b7a74)
    pve-kernel-4.4.35-2-pve: 4.4.35-79
    pve-kernel-4.4.44-1-pve: 4.4.44-84
    pve-kernel-4.4.19-1-pve: 4.4.19-66
    lvm2: 2.02.116-pve3
    corosync-pve: 2.4.2-2~pve4+1
    libqb0: 1.0-1
    pve-cluster: 4.0-48
    qemu-server: 4.0-109
    pve-firmware: 1.1-10
    libpve-common-perl: 4.0-92
    libpve-access-control: 4.0-23
    libpve-storage-perl: 4.0-76
    pve-libspice-server1: 0.12.8-2
    vncterm: 1.3-1
    pve-docs: 4.4-3
    pve-qemu-kvm: 2.7.1-4
    pve-container: 1.0-94
    pve-firewall: 2.0-33
    pve-ha-manager: 1.0-40
    ksm-control-daemon: 1.2-1
    glusterfs-client: 3.5.2-2+deb8u3
    lxc-pve: 2.0.7-3
    lxcfs: 2.0.6-pve1
    criu: 1.6.0-1
    novnc-pve: 0.5-8
    smartmontools: 6.5+svn4324-1~pve80
    Server is on the OVH network; I configured all IPs with virtual MACs in the OVH Manager interface.

    All debian based containers work fine with multiple IPs and network adapters, but for some reason any CentOS 7 containers will only work with the first IP added to the container. I tried using different failover IPs from other subnets, all without luck.

    I'm able to ping the main IP of the CentOS container from the host node, but not any of the additional ones.
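    For anyone debugging the same thing, a couple of checks from the host can show whether the bridge is even learning the container's second MAC (the IP below is a placeholder):

    ```shell
    # List the MAC addresses the bridge has learned; the container's eth1 MAC
    # (02:00:00:xx:xx:30 in my case) should show up here if frames are arriving
    brctl showmacs vmbr0

    # Check whether the host has a neighbor (ARP) entry for the unreachable IP
    ip neigh show to XXX.XXX.XXX.XXX
    ```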

    Here's the `/etc/network/interfaces` file from the host node:

    # The loopback network interface
    auto lo
    iface lo inet loopback
    iface eth2 inet manual
    # for Routing
    auto vmbr1
    iface vmbr1 inet manual
            post-up /etc/pve/
            bridge_ports dummy0
            bridge_stp off
            bridge_fd 0
    # vmbr0: Bridging. Make sure to use only MAC addresses that were assigned to you.
    auto vmbr0
    iface vmbr0 inet static
            address 149.x.x.155
            network 149.x.x.0
            broadcast 149.x.x.255
            gateway 149.x.x.254
            bridge_ports eth2
            bridge_stp off
            bridge_fd 0
    iface vmbr0 inet6 static
            address 2607:5300:0061:039b::
            netmask 64
            post-up /sbin/ip -f inet6 route add 2607:5300:0061:03ff:ff:ff:ff:ff dev vmbr0
            post-up /sbin/ip -f inet6 route add default via 2607:5300:0061:03ff:ff:ff:ff:ff
            pre-down /sbin/ip -f inet6 route del default via 2607:5300:0061:03ff:ff:ff:ff:ff
            pre-down /sbin/ip -f inet6 route del 2607:5300:0061:03ff:ff:ff:ff:ff dev vmbr0
    Here's the results of `ip link` in the guest (CentOS 7) container:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    84: eth0@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
        link/ether 02:00:00:xx:xx:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    86: eth1@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
        link/ether 02:00:00:xx:xx:30 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    Here's the `/etc/sysconfig/network-scripts/ifcfg-eth0` file:
    Here's the `/etc/sysconfig/network-scripts/ifcfg-eth1` file:
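    Roughly, they follow the usual Proxmox-generated layout, something like this (all values below are placeholders, not my exact config):

    ```shell
    # /etc/sysconfig/network-scripts/ifcfg-eth1 -- typical layout that
    # Proxmox generates for a container interface (IP and MAC are placeholders)
    DEVICE=eth1
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=XXX.XXX.XXX.XXX
    NETMASK=255.255.255.255
    HWADDR=02:00:00:xx:xx:30
    ```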
    Is there something I'm missing here, guys? I'm completely stumped by this one, as it only appears to be an issue on RHEL (CentOS specifically) ... any suggestions, comments, or ideas to help troubleshoot this would be greatly appreciated!

    #1 tripflex, Mar 13, 2017
    Last edited: Mar 15, 2017
  2. alnork

    alnork New Member

    Oct 10, 2015
    Same problem here...

    Did you ever fix it?
  3. egrueda

    egrueda Member

    Nov 9, 2010
    And one year later, I'm facing the same problem!
    It happens with the centos7 template, but not with the centos6 template.
    I downloaded an alternative centos7 template and got the same result.
    Can't find what's wrong :-(
  4. atom70

    atom70 New Member

    Nov 7, 2018

    I'm also using the latest version of Proxmox.

    When I create a container in Proxmox with multiple interfaces, each with a public IP address and its MAC (OVH), only the first interface works.

    This problem only happens with CentOS 7 and Ubuntu; Debian works perfectly. (My templates are up to date.)

    I looked at the network configuration of each container created from the templates and everything is the same; it's really weird that only Debian works ...
