Network too slow

fwinkler

Member
Dec 12, 2021
Hello,

I tested the Ceph network with iperf3. I barely get above 50 Gbit/s with a 100 Gbit card.
Isn't that rather slow?

Network card:

BCM57508 with 2 x 100G

Switch:

S5860-48SC

Code:
root@pve1:~# iperf3 -c 192.168.0.2 -t 3600 C-m
Connecting to host 192.168.0.2, port 5201
[  5] local 192.168.0.1 port 55766 connected to 192.168.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  3.79 GBytes  32.6 Gbits/sec   18    874 KBytes
[  5]   1.00-2.00   sec  2.65 GBytes  22.8 Gbits/sec    0    874 KBytes
[  5]   2.00-3.00   sec  3.18 GBytes  27.3 Gbits/sec    0    900 KBytes
[  5]   3.00-4.00   sec  2.17 GBytes  18.6 Gbits/sec    0    900 KBytes
[  5]   4.00-5.00   sec  2.17 GBytes  18.6 Gbits/sec    0    900 KBytes
[  5]   5.00-6.00   sec  2.18 GBytes  18.7 Gbits/sec    0    900 KBytes
[  5]   6.00-7.00   sec  4.04 GBytes  34.7 Gbits/sec    7    909 KBytes
[  5]   7.00-8.00   sec  2.57 GBytes  22.1 Gbits/sec    7    743 KBytes
[  5]   8.00-9.00   sec  3.14 GBytes  26.9 Gbits/sec   18    848 KBytes
[  5]   9.00-10.00  sec  2.13 GBytes  18.3 Gbits/sec    0    848 KBytes
[  5]  10.00-11.00  sec  3.60 GBytes  30.9 Gbits/sec    4    813 KBytes
[  5]  11.00-12.00  sec  2.90 GBytes  24.9 Gbits/sec    0    865 KBytes
[  5]  12.00-13.00  sec  3.51 GBytes  30.1 Gbits/sec   29    813 KBytes
[  5]  13.00-14.00  sec  2.94 GBytes  25.2 Gbits/sec    3    577 KBytes
[  5]  14.00-15.00  sec  2.79 GBytes  23.9 Gbits/sec    9    856 KBytes
[  5]  15.00-16.00  sec  2.30 GBytes  19.8 Gbits/sec    0    856 KBytes
[  5]  16.00-17.00  sec  5.47 GBytes  47.0 Gbits/sec    0    856 KBytes
[  5]  17.00-18.00  sec  5.47 GBytes  47.0 Gbits/sec    0    856 KBytes
[  5]  18.00-19.00  sec  5.44 GBytes  46.7 Gbits/sec    0    865 KBytes
[  5]  19.00-20.00  sec  5.47 GBytes  46.9 Gbits/sec    0    935 KBytes
[  5]  20.00-21.00  sec  5.46 GBytes  46.9 Gbits/sec    0    935 KBytes
[  5]  21.00-22.00  sec  5.41 GBytes  46.5 Gbits/sec    0   1014 KBytes
[  5]  22.00-23.00  sec  5.48 GBytes  47.1 Gbits/sec    0   1014 KBytes
[  5]  23.00-24.00  sec  5.80 GBytes  49.8 Gbits/sec    0   1.50 MBytes
[  5]  24.00-25.00  sec  6.26 GBytes  53.7 Gbits/sec    0   1.97 MBytes
[  5]  25.00-26.00  sec  4.94 GBytes  42.4 Gbits/sec    0   1.97 MBytes
[  5]  26.00-27.00  sec  3.71 GBytes  31.9 Gbits/sec    0   1.97 MBytes
[  5]  27.00-28.00  sec  3.55 GBytes  30.5 Gbits/sec    0   1.97 MBytes
[  5]  28.00-29.00  sec  3.41 GBytes  29.3 Gbits/sec    2   1.37 MBytes
[  5]  29.00-30.00  sec  3.90 GBytes  33.5 Gbits/sec    0   1.72 MBytes
[  5]  30.00-31.00  sec  5.88 GBytes  50.5 Gbits/sec    0   1.75 MBytes
[  5]  31.00-32.00  sec  2.97 GBytes  25.5 Gbits/sec    5   1.22 MBytes
^C[  5]  32.00-32.15  sec   334 MBytes  19.0 Gbits/sec    0   1.22 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-32.15  sec   125 GBytes  33.4 Gbits/sec  102             sender
[  5]   0.00-32.15  sec  0.00 Bytes  0.00 bits/sec                  receiver


Code:
ethtool eno12399np0
Settings for eno12399np0:
        Supported ports: [ FIBRE ]
        Supported link modes:   25000baseCR/Full
                                50000baseCR2/Full
                                100000baseCR4/Full
                                50000baseCR/Full
                                100000baseCR2/Full
                                200000baseCR4/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: RS  BASER   LLRS
        Advertised link modes:  25000baseCR/Full
                                50000baseCR2/Full
                                100000baseCR4/Full
                                50000baseCR/Full
                                100000baseCR2/Full
                                200000baseCR4/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 100000Mb/s
        Lanes: 4
        Duplex: Full
        Auto-negotiation: on
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: internal
        Supports Wake-on: g
        Wake-on: d
        Current message level: 0x00002081 (8321)
                               drv tx_err hw
        Link detected: yes

I don't know what throughput to expect here.

Can anyone shed some light on this?
 
Who is on the other end?
Without jumbo frames you can easily expect up to 92 Gbit/s, and with jumbo frames 96-98 Gbit/s.

The switch handles that without any problems, though I mostly use the S5850 BC models with 25 Gbit ports. ;)
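
As a rough sketch (temporary change only; the persistent setting belongs in /etc/network/interfaces), jumbo frames can be enabled and checked like this before re-running the benchmark:

Code:
# temporarily raise the MTU on the 100G test interface
ip link set dev eno12399np0 mtu 9000

# confirm the MTU that is actually in effect
ip link show dev eno12399np0 | grep -o 'mtu [0-9]*'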
 
The other end is a second Proxmox server with the same network card.

The MTU is set to 9000.

Code:
root@pve3:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto ens1f0np0
iface ens1f0np0 inet static
        address 10.144.188.12/24
        gateway 10.144.188.1
#MGMT 2

auto eno8303
iface eno8303 inet static
        address 172.16.1.3/28
#Cluster Netz

iface eno8403 inet manual

iface ens1f1np1 inet manual
#plano

auto ens1f2np2
iface ens1f2np2 inet manual
#VM Netz

auto ens1f3np3
iface ens1f3np3 inet manual
#VM Netz

auto eno12399np0
iface eno12399np0 inet manual
        mtu 9000
#100GB

auto eno12409np1
iface eno12409np1 inet manual
#100GB

auto bond0
iface bond0 inet manual
        bond-slaves ens1f2np2 ens1f3np3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2
#VM Netz

auto vmbr1
iface vmbr1 inet static
        address 10.144.189.12/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VM Netz

auto vmbr2
iface vmbr2 inet static
        address 192.168.240.4/24
        bridge-ports ens1f1np1.12
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        up ip route add 10.128.0.0/16 via 192.168.240
#plano

auto vmbr3
iface vmbr3 inet static
        address 192.168.0.3/24
        bridge-ports eno12399np0
        bridge-stp off
        bridge-fd 0
        mtu 9000
#Ceph Netz VMBridge

source /etc/network/interfaces.d/*

Here is the second Proxmox server:

Code:
root@pve2:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto ens1f0np0
iface ens1f0np0 inet static
        address 10.144.188.11/24
        gateway 10.144.188.1
#MGMT

auto eno8303
iface eno8303 inet static
        address 172.16.1.2/28
#Cluster Netz

iface eno8403 inet manual
#frei

auto ens1f1np1
iface ens1f1np1 inet manual
#plano

auto ens1f2np2
iface ens1f2np2 inet manual
#VM Netz

auto ens1f3np3
iface ens1f3np3 inet manual
#VM Netz

auto eno12399np0
iface eno12399np0 inet manual
        mtu 9000
#100GB

auto eno12409np1
iface eno12409np1 inet manual
        mtu 9000
#100GB

auto bond0
iface bond0 inet manual
        bond-slaves ens1f2np2 ens1f3np3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2
#VM Netz

auto vmbr1
iface vmbr1 inet static
        address 10.144.189.11/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VM Netz

auto vmbr2
iface vmbr2 inet static
        address 192.168.240.3/24
        bridge-ports ens1f1np1.12
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        up ip route add 10.128.0.0/16 via 192.168.240.1
#plano

auto vmbr3
iface vmbr3 inet static
        address 192.168.0.2/24
        bridge-ports eno12399np0
        bridge-stp off
        bridge-fd 0
        mtu 9000
#Ceph Netz VMBridge

source /etc/network/interfaces.d/*
 
Alright.
Please do not use a VM bridge for Ceph. Either assign the IP directly to the NIC or build a layer 3+4 LACP bond.
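
For illustration only (interface names taken from this thread, the address is just an example), such a layer 3+4 LACP bond in /etc/network/interfaces could look roughly like this; the switch ports would also have to be configured as an 802.3ad port channel:

Code:
auto bond1
iface bond1 inet static
        address 192.168.0.2/24
        bond-slaves eno12399np0 eno12409np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        mtu 9000
#Ceph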
 
The VM bridge is there for Kubernetes, which accesses the monitors that way via the ceph-csi drivers.

A layer 3+4 LACP bond with a single switch? A second identical switch will be added later, but I don't have it yet.
The switch also only has 8 x 100G ports, and there are 5 nodes.

I also still need a radosgw, and I'm not yet sure what the network should look like for that.

We have 5 Proxmox servers, each with:
2 x 1 Gbit
4 x 10 Gbit
2 x 100 Gbit

and a backup server
with 2 x 1 Gbit and 2 x 10 Gbit.


At the moment I still need one NIC connected to the "old infrastructure", preferably one of the 10 Gbit NICs.

Do you have a suggestion for how the network should be changed?



Here is the ceph.conf as well:

Code:
root@pve1:~# cat /etc/ceph/ceph.conf
[global]
        auth_client_required = cephx
        auth_cluster_required = cephx
        auth_service_required = cephx
        cluster_network = 192.168.0.1/24
        fsid = 0ea316e3-fe45-41d7-81d4-8751e5f402cf
        mon_allow_pool_delete = true
        mon_host = 192.168.0.2 192.168.0.1 192.168.0.3
        ms_bind_ipv4 = true
        ms_bind_ipv6 = false
        osd_pool_default_min_size = 2
        osd_pool_default_size = 3
        public_network = 192.168.0.0/24

[client]
        keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
        keyring = /etc/pve/ceph/$cluster.$name.keyring

[mds]
        keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.pve1]
        host = pve1
        mds_standby_for_name = pve

[mds.pve2]
        host = pve2
        mds_standby_for_name = pve

[mds.pve3]
        host = pve3
        mds_standby_for_name = pve

[mon.pve1]
        public_addr = 192.168.0.1

[mon.pve2]
        public_addr = 192.168.0.2

[mon.pve3]
        public_addr = 192.168.0.3
 
You created a vmbr3, and Ceph is using that bridge. It is known that the Linux bridge (vmbr) limits traffic throughput, so it should not be used for Ceph.

Simply assign the 192.168.0.x IP to the eno12399np0 NIC and delete vmbr3. Then you will get 100G throughput.
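
A minimal sketch of that change in /etc/network/interfaces, using the addresses already shown in this thread:

Code:
auto eno12399np0
iface eno12399np0 inet static
        address 192.168.0.2/24
        mtu 9000
#100G Ceph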
 
OK, but then the Kubernetes ceph-csi drivers no longer have a connection to the monitors.
Would I then have to use a different public_network for Ceph?
 
In that case I would separate public and cluster, possibly via VLANs on the same NIC. The cluster network can then run natively and the public network over the bridge. That way you could still fully utilize the NIC, depending on the workload.
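
If public and cluster end up in separate subnets, the corresponding ceph.conf entries would look something like this (the subnets are only an example); note that moving the monitors to a new public network generally means re-creating them with addresses in that subnet:

Code:
[global]
        cluster_network = 192.168.0.0/24
        public_network = 192.168.1.0/24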
 
That didn't help.

Code:
root@pve3:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto ens1f0np0
iface ens1f0np0 inet static
        address 10.144.188.12/24
        gateway 10.144.188.1
#MGMT 2

auto eno8303
iface eno8303 inet static
        address 172.16.1.3/28
#Cluster Netz

iface eno8403 inet manual

iface ens1f1np1 inet manual
#plano

auto ens1f2np2
iface ens1f2np2 inet manual
#VM Netz

auto ens1f3np3
iface ens1f3np3 inet manual
#VM Netz

auto eno12399np0
iface eno12399np0 inet static
        address 192.168.0.3/24
        mtu 9000
#100GB

auto eno12409np1
iface eno12409np1 inet manual
#100GB

auto bond0
iface bond0 inet manual
        bond-slaves ens1f2np2 ens1f3np3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2
#VM Netz

auto vmbr1
iface vmbr1 inet static
        address 10.144.189.12/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VM Netz

auto vmbr2
iface vmbr2 inet static
        address 192.168.240.4/24
        bridge-ports ens1f1np1.12
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        up ip route add 10.128.0.0/16 via 192.168.240
#plano

auto vmbr3
iface vmbr3 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        mtu 9000
#Ceph Netz VMBridge

Code:
root@pve3:~# iperf3 -c 192.168.0.1 C-m
Connecting to host 192.168.0.1, port 5201
[  5] local 192.168.0.3 port 51878 connected to 192.168.0.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  6.28 GBytes  53.9 Gbits/sec    0   1.37 MBytes
[  5]   1.00-2.00   sec  6.36 GBytes  54.6 Gbits/sec    0   1.37 MBytes
[  5]   2.00-3.00   sec  6.37 GBytes  54.7 Gbits/sec    0   1.37 MBytes
[  5]   3.00-4.00   sec  6.39 GBytes  54.9 Gbits/sec    0   1.37 MBytes
[  5]   4.00-5.00   sec  6.10 GBytes  52.4 Gbits/sec    0   1.37 MBytes
[  5]   5.00-6.00   sec  6.11 GBytes  52.5 Gbits/sec    0   1.37 MBytes
[  5]   6.00-7.00   sec  5.62 GBytes  48.3 Gbits/sec   31    690 KBytes
[  5]   7.00-8.00   sec  3.26 GBytes  28.0 Gbits/sec   12    926 KBytes
[  5]   8.00-9.00   sec  3.47 GBytes  29.8 Gbits/sec    8    961 KBytes
[  5]   9.00-10.00  sec  2.26 GBytes  19.4 Gbits/sec    0    961 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  52.2 GBytes  44.9 Gbits/sec   51             sender
[  5]   0.00-10.00  sec  52.2 GBytes  44.9 Gbits/sec                  receiver

Code:
root@pve1:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto ens1f0np0
iface ens1f0np0 inet static
        address 10.144.188.10/24
        gateway 10.144.188.1
#MGMT

auto eno8303
iface eno8303 inet static
        address 172.16.1.1/28
#Cluster Netz

iface eno8403 inet manual
#frei

iface ens1f1np1 inet manual
#plano

auto ens1f2np2
iface ens1f2np2 inet manual
#VM Netz

auto ens1f3np3
iface ens1f3np3 inet manual
#VM Netz

auto eno12399np0
iface eno12399np0 inet static
        address 192.168.0.1/24
        mtu 9000
#100GB

auto eno12409np1
iface eno12409np1 inet manual
#100GB

auto bond0
iface bond0 inet manual
        bond-slaves ens1f2np2 ens1f3np3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2
#VM Netz

auto vmbr1
iface vmbr1 inet static
        address 10.144.189.10/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VM Netz

auto vmbr2
iface vmbr2 inet static
        address 192.168.240.2/24
        bridge-ports ens1f1np1.12
        bridge-stp off
        bridge-fd 0
        up ip route add 10.128.0.0/16 via 192.168.240.1
#plano

auto vmbr3
iface vmbr3 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        mtu 9000
#Ceph Netz VMBridge

source /etc/network/interfaces.d/*

Code:
Accepted connection from 192.168.0.3, port 51876
[  5] local 192.168.0.1 port 5201 connected to 192.168.0.3 port 51878
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  6.28 GBytes  53.9 Gbits/sec
[  5]   1.00-2.00   sec  6.36 GBytes  54.7 Gbits/sec
[  5]   2.00-3.00   sec  6.36 GBytes  54.7 Gbits/sec
[  5]   3.00-4.00   sec  6.39 GBytes  54.9 Gbits/sec
[  5]   4.00-5.00   sec  6.10 GBytes  52.4 Gbits/sec
[  5]   5.00-6.00   sec  6.11 GBytes  52.5 Gbits/sec
[  5]   6.00-7.00   sec  5.62 GBytes  48.2 Gbits/sec
[  5]   7.00-8.00   sec  3.27 GBytes  28.1 Gbits/sec
[  5]   8.00-9.00   sec  3.47 GBytes  29.8 Gbits/sec
[  5]   9.00-10.00  sec  2.26 GBytes  19.4 Gbits/sec
[  5]  10.00-10.00  sec   707 KBytes  16.8 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  52.2 GBytes  44.9 Gbits/sec                  receiver
-----------------------------------------------------------
 
You are probably running into the problem that iperf3 is not multi-threaded, so with 100G NICs the benchmarks are often CPU-bound. Can you try installing iperf2 (package iperf) and running the test with several threads? That should work with the -P switch.
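
A sketch of that test, assuming a Debian-based host and reusing the Ceph address from this thread:

Code:
apt install iperf
# run the benchmark with e.g. 8 parallel client threads
iperf -c 192.168.0.1 -P 8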
 
OK,
that was it.

Code:
root@pve3:~# iperf -c 192.168.0.1 C-m  -P 1000
[SUM] 0.0000-10.0602 sec   116 GBytes  99.2 Gbits/sec

Does it make sense to run the 2 x 100G cards in a bond already, even though only one is currently connected?
A second switch will definitely be added later.

Something like this:
Code:
+auto bond1
+iface bond1 inet static
+    address 192.168.0.1/24
+    bond-slaves eno12399np0 eno12409np1
+    bond-miimon 100
+    bond-mode 802.3ad
+    bond-xmit-hash-policy layer3+4
+
+auto vmbr4
+iface vmbr4 inet static
+    address 192.168.1.1/24
+    bridge-ports bond1.100
+    bridge-stp off
+    bridge-fd 0
+#Ceph Public Netz
 
It definitely makes sense to prepare the bond now. That makes things noticeably easier and quicker later.
 
This is how I have it now. With iperf it has dropped to 94 Gbit/s.

Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto ens1f0np0
iface ens1f0np0 inet static
        address 10.144.188.10/24
        gateway 10.144.188.1
#MGMT

auto eno8303
iface eno8303 inet static
        address 172.16.1.1/28
#Cluster Netz

iface eno8403 inet manual
#frei

iface ens1f1np1 inet manual
#plano

auto ens1f2np2
iface ens1f2np2 inet manual
#VM Netz

auto ens1f3np3
iface ens1f3np3 inet manual
#VM Netz

auto eno12399np0
iface eno12399np0 inet manual
        mtu 9000
#100GB

auto eno12409np1
iface eno12409np1 inet manual
#100GB

auto bond0
iface bond0 inet manual
        bond-slaves ens1f2np2 ens1f3np3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
#VM Netz

auto bond1
iface bond1 inet manual
        bond-slaves eno12399np0 eno12409np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        mtu 9000
#Ceph Cluster Netz

auto vmbr1
iface vmbr1 inet static
        address 10.144.189.10/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VM Netz

auto vmbr2
iface vmbr2 inet static
        address 192.168.240.2/24
        bridge-ports ens1f1np1.12
        bridge-stp off
        bridge-fd 0
        up ip route add 10.128.0.0/16 via 192.168.240.1
#plano

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        bridge-ports bond1.10
        bridge-stp off
        bridge-fd 0
        mtu 9000
#Ceph Cluster Netz

auto vmbr3
iface vmbr3 inet static
        address 192.168.1.1/24
        bridge-ports bond1.20
        bridge-stp off
        bridge-fd 0
#Ceph Client

source /etc/network/interfaces.d/*
 
Careful, you have the MTU set to 9000 on iface eno12399np0 but not on iface eno12409np1.
Apart from that, I would not have built a bridge for the cluster network but simply used a VLAN interface. Only the client network is needed by your Kubernetes.
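
A quick way to spot such a mismatch is to compare the MTU that is actually in effect on the bond and on both slaves, e.g.:

Code:
for i in bond1 eno12399np0 eno12409np1; do echo -n "$i: "; cat /sys/class/net/$i/mtu; done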
 
Slowly getting there:

back to 99 Gbit/s
Code:
[SUM] 0.0000-10.0343 sec   116 GBytes  99.1 Gbits/sec

Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto ens1f0np0
iface ens1f0np0 inet static
        address 10.144.188.10/24
        gateway 10.144.188.1
#MGMT

auto eno8303
iface eno8303 inet static
        address 172.16.1.1/28
#Cluster Netz

iface eno8403 inet manual
#frei

iface ens1f1np1 inet manual
#plano

auto ens1f2np2
iface ens1f2np2 inet manual
#VM Netz

auto ens1f3np3
iface ens1f3np3 inet manual
#VM Netz

auto eno12399np0
iface eno12399np0 inet manual
        mtu 9000
#100GB

auto eno12409np1
iface eno12409np1 inet manual
        mtu 9000
#100GB

auto bond0
iface bond0 inet manual
        bond-slaves ens1f2np2 ens1f3np3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
#VM Netz

auto bond1
iface bond1 inet manual
        bond-slaves eno12399np0 eno12409np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        mtu 9000
#Ceph Cluster Netz

auto vmbr1
iface vmbr1 inet static
        address 10.144.189.10/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VM Netz

auto vmbr2
iface vmbr2 inet static
        address 192.168.240.2/24
        bridge-ports ens1f1np1.12
        bridge-stp off
        bridge-fd 0
        up ip route add 10.128.0.0/16 via 192.168.240.1
#plano

auto vmbr3
iface vmbr3 inet static
        address 192.168.1.1/24
        bridge-ports bond1.20
        bridge-stp off
        bridge-fd 0
#Ceph Client

auto vlan10
iface vlan10 inet static
        address 192.168.0.1/24
        mtu 9000
        vlan-raw-device bond1
#Ceph Cluster Netz

source /etc/network/interfaces.d/*
 
Since I rebuilt the network this way, with the cluster network and the public network on VLANs, I can no longer get the ceph-csi drivers to work.
Coincidence, or could that be related? The monitors are reachable from the ceph-csi container.
 
If the monitors are reachable, everything should work. Does the MTU match everywhere?
 
Yes, something about the MTU is indeed not right.

Code:
root@pve1:~# ssh root@pve2  ping -M do -s 9000  192.168.1.1
PING 192.168.1.1 (192.168.1.1) 9000(9028) bytes of data.
ping: local error: message too long, mtu=9000
ping: local error: message too long, mtu=9000
ping: local error: message too long, mtu=9000
ping: local error: message too long, mtu=9000
ping: local error: message too long, mtu=9000

Code:
PING 192.168.1.1 (192.168.1.1) 8800(8828) bytes of data.
8808 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.066 ms
8808 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.066 ms
8808 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.071 ms
8808 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=0.087 ms
8808 bytes from 192.168.1.1: icmp_seq=5 ttl=64 time=0.060 ms
8808 bytes from 192.168.1.1: icmp_seq=6 ttl=64 time=0.083 ms
8808 bytes from 192.168.1.1: icmp_seq=7 ttl=64 time=0.071 ms
8808 bytes from 192.168.1.1: icmp_seq=8 ttl=64 time=0.077 ms
8808 bytes from 192.168.1.1: icmp_seq=9 ttl=64 time=0.068 ms
8808 bytes from 192.168.1.1: icmp_seq=10 ttl=64 time=0.073 ms
8808 bytes from 192.168.1.1: icmp_seq=11 ttl=64 time=0.083 ms
8808 bytes from 192.168.1.1: icmp_seq=12 ttl=64 time=0.070 ms

Don't you have to subtract something because of LACP?
 
As you can see, when you send a ping with 9000 bytes of data, the outgoing IP packet is 9028 bytes.
So try pinging with 8972-byte packets.
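
The arithmetic behind that suggestion: the ICMP payload plus the ICMP and IPv4 headers must fit within the interface MTU.

Code:
# 9000 (MTU) - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes of payload
ping -M do -s 8972 192.168.1.1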
 
