Container and Networking issues

Christoph_

Active Member
May 5, 2017
I'm experiencing some strange issues within my containers.

Host:
Single AMD Epyc 7252, 128GB Ram, Mirror rpool, 2x Raidz1 nvme SSD Datastore
Supermicro 40GBit QSFP Nic (Intel XL710), configured with Linux Bridge

pve01 ~ # pveversion
pve-manager/7.0-13/7aa7e488 (running kernel: 5.11.22-5-pve)

pve01 ~ # pveperf
CPU BOGOMIPS: 99201.28
REGEX/SECOND: 2867823
HD SIZE: 36.59 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 1732.53
DNS EXT: 27.43 ms

Currently running 18 Debian 11 containers, mixed privileged/unprivileged.

There are two issues:

First, some containers fail to autocomplete paths (begin to type, then press TAB) on the first and second try, hanging until a long timeout expires. After one or two attempts, everything runs smoothly. This happens on no specific container; it seems to rotate randomly.
Has anyone experienced something like this? I already tried raising the limits in /etc/security/limits.conf:
Code:
*       soft    nofile  1048576
*       hard    nofile  1048576
root    soft    nofile  1048576
root    hard    nofile  1048576
*       soft    memlock unlimited
*       hard    memlock unlimited
root    soft    memlock unlimited
root    hard    memlock unlimited

Doesn't seem to help.
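To narrow down whether the slow completion is a name-lookup timeout rather than a filesystem issue, a quick check from inside an affected container might look like this (a sketch; the hostname is illustrative — swap `localhost` for an external name to actually exercise DNS):

```shell
# Time a few consecutive name lookups. If the first one stalls for
# several seconds and later ones return instantly, an nsswitch/DNS
# timeout (not the filesystem) is the likely cause of the delay.
host_to_check=localhost   # illustrative; use an external name to test DNS
for i in 1 2 3; do
  t0=$(date +%s%N)
  getent hosts "$host_to_check" >/dev/null || true
  t1=$(date +%s%N)
  echo "lookup $i: $(( (t1 - t0) / 1000000 )) ms"
done
```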

Second, I'm getting networking issues, mostly noticeable as Docker update "Failed to resolve DNS names" mails coming in throughout the day and timeouts when monitoring the containers with check_mk.
Currently a Linux bridge with a 4-port bond is in use:

Code:
auto 40g-p01
iface 40g-p01 inet manual
        mtu 9000
        up ip link set 40g-p01 txqueuelen 10000
#40GBit - Port 01

auto 40g-p02
iface 40g-p02 inet manual
        mtu 9000
        up ip link set 40g-p02 txqueuelen 10000
#40GBit - Port 02

iface enxb03af2b6059f inet manual
# IPMI Interface

auto 1g-p01
iface 1g-p01 inet manual
        mtu 9000
        up ip link set 1g-p01 txqueuelen 10000
#1GBit - Port 01

auto 1g-p02
iface 1g-p02 inet manual
        mtu 9000
        up ip link set 1g-p02 txqueuelen 10000
#1GBit - Port 02

auto bond0
iface bond0 inet manual
        bond-slaves 40g-p01 40g-p02 1g-p01 1g-p02
        bond-miimon 100
        bond-mode active-backup
        bond-primary 40g-p01
        mtu 9000
        up ip link set bond0 txqueuelen 10000
#Switch Bond

auto vmbr0
iface vmbr0 inet static
        address 172.16.0.210/24
        gateway 172.16.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-pvid 1
        bridge-vids 9 10 11 15 20 21 22 30 150 151 152 200 201 301 302 901 902
        mtu 9000
        up ip link set vmbr0 txqueuelen 10000
#Bridge

I'm thinking about switching over to OVS, but I can't figure out how to do the "bond-primary" failback there. Any hints or other suggestions?
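For what it's worth, OVS does expose a primary-slave setting for active-backup bonds via `other_config:bond-primary` (see ovs-vswitchd.conf.db(5)). An untested sketch of the equivalent /etc/network/interfaces stanzas, keeping the interface names from above (requires the openvswitch-switch package; VLAN trunking and the host IP would move to an OVSIntPort, which this sketch omits):

Code:
```
auto bond0
iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds 40g-p01 40g-p02 1g-p01 1g-p02
        ovs_options bond_mode=active-backup other_config:bond-primary=40g-p01
        mtu 9000

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0
        mtu 9000
```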
 
Hey :)

Two ideas:

- You're running the Docker subsystem inside your containers, but Docker's default bridge may use the same network range as your vmbr0 interface. Can you try moving the IP network of your Proxmox host and see whether the DNS resolution problem goes away?

- Did you deactivate IPv6? If not, try disabling it, reboot, and check the situation again after these two changes.
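One way to test the first idea without renumbering the host is to pin Docker's internal networks to a range that cannot collide with 172.16.0.0/24. A sketch of `/etc/docker/daemon.json` using the documented `default-address-pools` key (the range itself is illustrative — pick one unused in your environment, then restart the Docker daemon):

Code:
```
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
```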

Best regards,
 
