LXC multiple NICs accessibility issue

Discussion in 'Proxmox VE: Networking and Firewall' started by PretoX, Jul 7, 2016.

  1. PretoX

    PretoX Member

    # pveversion -v
    proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve)
    pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
    pve-kernel-4.4.6-1-pve: 4.4.6-48
    pve-kernel-4.4.13-1-pve: 4.4.13-56
    pve-kernel-4.2.6-1-pve: 4.2.6-36
    pve-kernel-4.2.8-1-pve: 4.2.8-41
    pve-kernel-4.4.10-1-pve: 4.4.10-54
    lvm2: 2.02.116-pve2
    corosync-pve: 2.3.5-2
    libqb0: 1.0-1
    pve-cluster: 4.0-42
    qemu-server: 4.0-83
    pve-firmware: 1.1-8
    libpve-common-perl: 4.0-70
    libpve-access-control: 4.0-16
    libpve-storage-perl: 4.0-55
    pve-libspice-server1: 0.12.5-2
    vncterm: 1.2-1
    pve-qemu-kvm: 2.5-19
    pve-container: 1.0-70
    pve-firewall: 2.0-29
    pve-ha-manager: 1.0-32
    ksm-control-daemon: 1.2-1
    glusterfs-client: 3.5.2-2+deb8u2
    lxc-pve: 1.1.5-7
    lxcfs: 2.0.0-pve2
    cgmanager: 0.39-pve1
    criu: 1.6.0-1
    zfsutils: 0.6.5.7-pve10~bpo80

    # cat /etc/pve/lxc/105.conf
    arch: amd64
    cpulimit: 6
    cpuunits: 1024
    hostname: ps3
    memory: 20480
    mp0: VM-DATA:subvol-105-disk-2,acl=0,mp=/DATA,size=250G
    nameserver: 8.8.8.8 8.8.4.4
    net0: bridge=vmbr0,gw=ip.ip.ip.ipGW,hwaddr=36:63:36:63:37:61,ip=ip.ip.ip.ip1/24,name=eth0,type=veth
    net1: bridge=vmbr0,hwaddr=36:38:32:66:64:35,ip=localip/24,name=eth1,type=veth
    net2: name=eth2,bridge=vmbr0,hwaddr=66:34:65:62:30:30,ip=ip.ip.ip.ip2/24,type=veth
    net3: name=eth3,bridge=vmbr0,hwaddr=62:65:65:37:34:30,ip=ip.ip.ip.ip3/24,type=veth
    net4: name=eth4,bridge=vmbr0,hwaddr=36:66:36:32:62:32,ip=ip.ip.ip.ip4/24,type=veth
    net5: name=eth5,bridge=vmbr0,hwaddr=3A:33:64:65:35:66,ip=ip.ip.ip.ip5/24,type=veth
    onboot: 1
    ostype: centos
    parent: network
    protection: 1
    rootfs: VM-DATA:subvol-105-disk-1,acl=0,size=100G
    searchdomain: ingenuity.net.au
    snaptime: 1467632550
    swap: 10240

    [network]
    #some interfaces became UP again?!&! why? how?
    arch: amd64
    cpulimit: 6
    cpuunits: 1024
    hostname: ps3.ingenuity.net.au
    memory: 20480
    mp0: VM-DATA:subvol-105-disk-2,acl=0,mp=/DATA,size=250G
    nameserver: 8.8.8.8 8.8.4.4
    net0: bridge=vmbr0,gw=ip.ip.ip.ipGW,hwaddr=36:63:36:63:37:61,ip=ip.ip.ip.ip1/24,name=eth0,type=veth
    net1: bridge=vmbr0,hwaddr=36:38:32:66:64:35,ip=localip/24,name=eth1,type=veth
    net2: name=eth2,bridge=vmbr0,hwaddr=66:34:65:62:30:30,ip=ip.ip.ip.ip2/24,type=veth
    net3: name=eth3,bridge=vmbr0,hwaddr=62:65:65:37:34:30,ip=ip.ip.ip.ip3/24,type=veth
    net4: name=eth4,bridge=vmbr0,hwaddr=36:66:36:32:62:32,ip=ip.ip.ip.ip4/24,type=veth
    net5: name=eth5,bridge=vmbr0,hwaddr=3A:33:64:65:35:66,ip=ip.ip.ip.ip5/24,type=veth
    onboot: 1
    ostype: centos
    protection: 1
    rootfs: VM-DATA:subvol-105-disk-1,acl=0,size=100G
    searchdomain: ingenuity.net.au
    snaptime: 1467803132
    swap: 10240

    The issue is: after adding a new NIC to an LXC container, or after rebooting it, all interfaces except eth0 stop pinging outside the /24 network. BUT they come back one by one over several hours: wait ~2 hours and eth2 starts pinging, another ~2 hours and eth3 comes up, and so on.
    This is a big issue, please help. The logs are empty around the times when the interfaces come back up.
    This happens only with LXCs created before the PVE updates. All new LXCs, and NICs added to them, work great.
     
  2. Richard

    Richard Proxmox Staff Member

    What does "stop pinging" mean exactly? Are the interfaces not active, or is it just that connections to/from destinations outside your own subnet (i.e. routed connections) are not possible?
     
  3. PretoX

    PretoX Member

    The interfaces are active. The IPs are reachable only from within the /24 network; all connections outside the /24 are unavailable, as if the bridge were broken.
     
  4. Richard

    Richard Proxmox Staff Member

    That sounds more like a routing problem than a bridging one. Check whether the routing tables are correct. Or is there a firewall active?

    If all of that looks correct, follow the packets with tcpdump.
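    A few commands one might use for those checks; the interface name and the capture address below are placeholders taken from the config above, so adjust to your setup:

        # inside the container: inspect the routing table and assigned addresses
        ip -4 route show
        ip -4 addr show

        # on the PVE host: watch whether packets for one of the affected IPs
        # actually reach and leave the bridge
        tcpdump -ni vmbr0 host ip.ip.ip.ip2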
     
  5. chris121212

    chris121212 New Member

    I've got a similar problem.
    A container converted from OpenVZ (Ubuntu 12 and 14) has the problem that the second IP address is not reachable from outside; only one interface responds to pings etc.

    Newly created containers work perfectly with 2 or more interfaces. How can I check my routing?
     
  7. PretoX

    PretoX Member

    An answer:

    net0, net1, net2, net3 and net4 are all in the same subnet, both from the network's point of view (all connected to vmbr0) and from the address-definition point of view (all have an IP address in xxx.xxx.xxx.xxx/24).

    There is no need to define separate virtual NICs for that; define only net0 and assign the additional addresses as aliases, like:
    eth0:131 xxxx
    eth0:132 xxxx
    eth0:133 xxxx
    eth0:134 xxxx

    This configuration cannot be made in the container definition, but it can easily be done in the container's start-up configuration (a "post-up" entry in /etc/network/interfaces); see the sketch below.
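    A minimal sketch of what that could look like in the container's /etc/network/interfaces, assuming a Debian-style ifupdown setup (all addresses are placeholders matching the alias list above):

        auto eth0
        iface eth0 inet static
            address xxx.xxx.xxx.130/24
            gateway xxx.xxx.xxx.254
            # bring up the additional addresses as aliases once eth0 is up
            post-up ip addr add xxx.xxx.xxx.131/24 dev eth0
            post-up ip addr add xxx.xxx.xxx.132/24 dev eth0
            post-up ip addr add xxx.xxx.xxx.133/24 dev eth0
            post-up ip addr add xxx.xxx.xxx.134/24 dev eth0
            # and take them down again with the interface
            pre-down ip addr del xxx.xxx.xxx.131/24 dev eth0
            pre-down ip addr del xxx.xxx.xxx.132/24 dev eth0
            pre-down ip addr del xxx.xxx.xxx.133/24 dev eth0
            pre-down ip addr del xxx.xxx.xxx.134/24 dev eth0

    As raised later in the thread, PVE may rewrite this file when the network is edited in the GUI, so the entries may need re-adding after such changes.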

    This means Proxmox containers still have some limitations. Hope this helps someone :)
     
  8. chris121212

    chris121212 New Member

    Can you post your interfaces file from inside the container? Thank you
     
  9. PretoX

    PretoX Member

    That's the network part:

    net0: name=eth0,bridge=vmbr0,gw=2xx.xxx.xxx.xx3,hwaddr=F2:CE:FB:53:D8:E4,ip=2xx.xxx.xxx.130/24,type=veth
    net5: name=eth5,bridge=vmbr0,hwaddr=fe:f4:93:42:2e:8a,ip=192.168.xxx.130/24,type=veth
    onboot: 1
    ostype: centos

    I have CentOS; I think you have Ubuntu, so you would create an executable file under /etc/network/if-up.d that adds the aliases when eth0 comes up, along the lines of the sketch below.

    For now I have 5 IPs running on eth0.
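    A minimal sketch of such a hook, assuming ifupdown on Ubuntu/Debian; the filename and addresses are hypothetical, and ifupdown exports the interface name as $IFACE:

        #!/bin/sh
        # /etc/network/if-up.d/eth0-aliases (hypothetical filename)
        # ifupdown runs every executable file in this directory after an
        # interface comes up, with the interface name in $IFACE
        [ "$IFACE" = "eth0" ] || exit 0
        ip addr add xxx.xxx.xxx.131/24 dev eth0
        ip addr add xxx.xxx.xxx.132/24 dev eth0

    Remember to make it executable (chmod +x), otherwise ifupdown silently skips it.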
     
  10. tripflex

    tripflex New Member

    For anybody who comes across this thread while having trouble adding an additional failover IP from OVH to a CentOS container: the only solution I found was to set the virtual MAC in the OVH manager for the additional IP to the SAME MAC address as the container's main IP. After doing that, the additional IP became pingable from the internet to the VM. Just wanted to post this here in case someone else has this issue, as I spent a TON of time trying to figure it out :(
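    In container terms this means the failover IP ends up being answered from the same MAC, i.e. as an alias on eth0 rather than on a second NIC. A hypothetical sketch with placeholder VMID, MAC and addresses:

        # on the host: give net0 the MAC registered in the OVH manager
        pct set 105 -net0 name=eth0,bridge=vmbr0,hwaddr=02:00:00:aa:bb:cc,ip=203.0.113.130/24,gw=203.0.113.254
        # inside the container: add the failover IP as an alias on the same interface
        ip addr add 198.51.100.78/32 dev eth0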
     
  11. Sebastian's

    Sebastian's New Member

    God bless you, man. It only works this way. Thanks.
     
  12. Simon Mott

    Simon Mott Member

    Check out the sysctl options rp_filter and arp_ignore.

    I had problems a while back with two separate interfaces, both in the same broadcast domain: traffic would flip-flop between them because of a combination of these two settings.

    I have a more detailed write-up on my site, but I don't want to just be posting links to it with every post on this forum :D

    TL;DR

    Set rp_filter to 2 (loose mode) or 0 (off).

    Configuring arp_ignore properly was optional for me; I didn't need replies going out of both interfaces, so in my case I set it to 2. A detailed explanation of these settings can be found on kernel.org (or google my name and look for rp_filter :))
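    A sketch of how one might apply those settings inside the container (to persist them they would go into /etc/sysctl.conf or a file under /etc/sysctl.d/):

        # reverse path filtering: 2 = loose mode, 0 = off
        sysctl -w net.ipv4.conf.all.rp_filter=2
        # be conservative about which interface answers ARP requests;
        # see the kernel.org networking docs for the exact semantics
        sysctl -w net.ipv4.conf.all.arp_ignore=2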
     
  13. luison

    luison Member

    Hi all. We have a proxy server with a few IPs from OVH pointing to it, set up as per the recommended IP configuration for PVE on OVH: a virtual MAC address created for each IP, and all their gateways pointing to the same gateway as the host.

    [screenshot of the OVH/PVE failover IP and virtual MAC configuration]

    This was working on PVE 4 and also after the upgrade to 5. We then upgraded the container to Debian 9, and when we tried to bring it up again and restore the same config (it actually worked at first, then failed after a restart), we started getting issues with "some" of the IPs: service restart problems, no access from the host, etc.

    We've rechecked the configuration, and that ended up bringing me here. I understand from this thread and another post from @PretoX that the suggestion is to create the additional IPs in the container as aliases of the first, instead of the "documented" way, but I've also read that the documented way should still work on Debian 9.

    • Does anyone have this setup working on OVH with a Debian 9 container?

    • Should we just "remove" the gateway from the additional IPs? Would that change the originating IP of outgoing connections?

    • Should we use the "virtual" (alias) IP approach? If so, is this definable in the PVE interface at all? If not, doesn't PVE remove the container's interfaces config file whenever you adjust the network in the panel?
    Many thanks.

    JL


    ----------------
    Further to this, following another thread, I've also tried adding the interfaces via the command line to capture the error:

    # pct set 1192 -net2 bridge=vmbr0,hwaddr=02:00:00:XX:XX:XX,name=eth2,type=veth,ip=94.xxx.xxx.78/32,gw=51.xx.xx.254
    RTNETLINK answers: File exists
    command 'lxc-attach -n 1192 -s NETWORK -- /sbin/ip -4 route add 51.xx.xx.254 dev eth2' failed: exit code 2

    Despite the error, the interface now shows up in the web interface (LXC config), but not in the container's interfaces file.

    Running the same command without a gateway produces no error, but I cannot access the new IP from the host. So option 2, "remove the gateway", does not seem to work in this case.
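    For what it's worth, "RTNETLINK answers: File exists" usually means the route being added already exists, here most likely because a route to that same gateway is already installed via eth0. One might confirm that from the host (VMID as above):

        # list the container's IPv4 routes and look for an existing
        # entry covering the 51.xx.xx.254 gateway
        lxc-attach -n 1192 -- ip -4 route show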
     
    #13 luison, Feb 26, 2019
    Last edited: Feb 26, 2019