Network hangs under load

mcbarlo

New Member
Oct 10, 2015
I have a cluster of three servers with Ceph storage. All NICs are Intel X710 10G. On the Ceph network everything is fine, but on the NIC facing the Internet I have an issue. When the traffic load reaches about 1.5-2 Gbps per node, the network hangs. I have to disconnect and reconnect the VM's virtual NIC. This solves the problem, sometimes for a few hours, sometimes only for a few minutes.

All guest systems are Debian 9 and the virtual NICs are virtio.

Could you help me debug this problem?

Code:
proxmox-ve: 5.0-15 (running kernel: 4.10.15-1-pve)
pve-manager: 5.0-23 (running version: 5.0-23/af4267bf)
pve-kernel-4.10.15-1-pve: 4.10.15-15
libpve-http-server-perl: 2.0-5
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-10
qemu-server: 5.0-12
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-5
libpve-storage-perl: 5.0-12
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-6
pve-qemu-kvm: 2.9.0-2
pve-container: 2.0-14
pve-firewall: 3.0-1
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
openvswitch-switch: 2.6.2~pre+git20161223-3
ceph: 12.1.0-pve2
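
For reference, the reconnect workaround looks roughly like this from the host shell; I normally do it in the GUI, and the VM ID and MAC here are just placeholders, not my real config:

Code:
# take the guest's virtual NIC link down and back up from the Proxmox host
# (setting net0 replaces the whole definition, so model/MAC/bridge are repeated)
qm set 100 -net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,link_down=1
qm set 100 -net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,link_down=0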
 
Hi,

You have to provide more information about your network.
Please post your network config.
 
OK, here is the interfaces config:

Code:
auto lo
iface lo inet loopback

allow-vmbr0 ens2f0
iface ens2f0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

iface ens9f0 inet manual

iface ens9f1 inet manual

iface ens9f2 inet manual

iface ens9f3 inet manual

allow-vmbr1 ens2f1
iface ens2f1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

allow-vmbr1 ens2f2
iface ens2f2 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

iface ens2f3 inet manual

auto vmbr1
iface vmbr1 inet static
    address  10.0.0.200
    netmask  255.255.255.0
    ovs_type OVSBridge
    ovs_ports ens2f1 ens2f2
    pre-up ip link set dev ens2f1 mtu 9000; ip link set dev ens2f2 mtu 9000
    up ovs-vsctl set Bridge ${IFACE} rstp_enable=true

auto vmbr0
iface vmbr0 inet static
    address  xx.xx.128.200
    netmask  255.255.255.192
    gateway  xx.xx.128.193
    ovs_type OVSBridge
    ovs_ports ens2f0
 
I tried different kernel versions in the guests and a Linux bridge instead of OVS. Unfortunately, nothing helps.

Can I use another virtual NIC model to achieve transfers above 1 Gbps? Maybe the problem is only in the virtio driver?
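
For testing I could switch the model or enable virtio multiqueue from the host, e.g. (the VM ID and MAC are placeholders):

Code:
# try the emulated e1000 model instead of virtio
qm set 100 -net0 e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0
# or keep virtio but enable multiqueue, roughly one queue per vCPU
qm set 100 -net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4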
 
Try using a plain Linux bridge instead of OVS for the Internet network.
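
A minimal sketch of vmbr0 as a plain Linux bridge, reusing the addresses from your config:

Code:
auto vmbr0
iface vmbr0 inet static
    address  xx.xx.128.200
    netmask  255.255.255.192
    gateway  xx.xx.128.193
    bridge_ports ens2f0
    bridge_stp off
    bridge_fd 0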
 
I tried it, but nothing changed. I am thinking about passing the NIC through into the VM, but that is not a real solution.
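
Next time it hangs I will grab some driver info on the host first (interface names as in my config above; the X710 uses the i40e driver):

Code:
# driver and firmware version of the uplink NIC
ethtool -i ens2f0
# per-queue counters, looking for errors or drops
ethtool -S ens2f0
# kernel messages from the i40e driver around the hang
dmesg | grep -i i40e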
 
