VMs hang when..

a1d3s

Active Member
May 23, 2017
Hello.
I currently have 4 nodes in a cluster with Ceph, PVE Virtual Environment 4.4-13/7ea56165.
There are roughly 15 machines on this cluster.
When I set up a new VM and then copy about 100 GB into it via rsync from another system, the other 15 VMs crash with kernel hangs.
The 4 nodes are connected via 10 Gbit.
Code:
May 23 11:29:08 pve1 snmpd[1960]: error on subcontainer 'ia_addr' insert (-1)
May 23 11:29:08 pve1 snmpd[1960]: error on subcontainer 'ia_addr' insert (-1)
May 23 11:29:08 pve1 snmpd[1960]: error on subcontainer 'ia_addr' insert (-1)
May 23 11:29:38 pve1 snmpd[1960]: error on subcontainer 'ia_addr' insert (-1)
May 23 11:29:38 pve1 snmpd[1960]: error on subcontainer 'ia_addr' insert (-1)
May 23 11:29:38 pve1 snmpd[1960]: error on subcontainer 'ia_addr' insert (-1)
May 23 11:29:48 pve1 rrdcached[1798]: flushing old values
May 23 11:29:48 pve1 rrdcached[1798]: rotating journals
May 23 11:29:48 pve1 rrdcached[1798]: started new journal /var/lib/rrdcached/journal/rrd.journal.1495531788.045170
May 23 11:29:48 pve1 rrdcached[1798]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1495524588.045129
May 23 11:29:49 pve1 pmxcfs[1908]: [dcdb] notice: data verification successful
 
Code:
root@pve3:~# lspci
00:00.0 Host bridge: Intel Corporation 3200/3210 Chipset DRAM Controller
00:01.0 PCI bridge: Intel Corporation 3200/3210 Chipset Host-Primary PCI Express Bridge
00:06.0 PCI bridge: Intel Corporation 3210 Chipset Host-Secondary PCI Express Bridge
00:19.0 Ethernet controller: Intel Corporation 82566DM-2 Gigabit Network Connection (rev 02)
00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02)
00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02)
00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02)
00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02)
00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02)
00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02)
00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)
00:1f.0 ISA bridge: Intel Corporation 82801IR (ICH9R) LPC Interface Controller (rev 02)
00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
01:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
01:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
02:00.0 RAID bus controller: 3ware Inc 9690SA SAS/SATA-II RAID PCIe (rev 01)
03:00.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
04:02.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
04:04.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
05:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
05:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
06:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
06:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
07:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
08:02.0 Ethernet controller: Intel Corporation 82541GI Gigabit Ethernet Controller (rev 05)
mii-tool
Code:
root@pve3:~# mii-tool
eth0: no link
SIOCGMIIREG on eth1 failed: Input/output error
SIOCGMIIREG on eth1 failed: Input/output error
eth1: no link
SIOCGMIIREG on eth2 failed: Input/output error
SIOCGMIIREG on eth2 failed: Input/output error
eth2: negotiated 100baseTx-FD, link ok
SIOCGMIIPHY on 'eth3' failed: Operation not supported
SIOCGMIIREG on eth4 failed: Input/output error
SIOCGMIIREG on eth4 failed: Input/output error
eth4: negotiated 1000baseT-FD flow-control, link ok
SIOCGMIIREG on eth5 failed: Input/output error
SIOCGMIIREG on eth5 failed: Input/output error
eth5: negotiated 1000baseT-FD flow-control, link ok
SIOCGMIIREG on eth6 failed: Input/output error
SIOCGMIIREG on eth6 failed: Input/output error
eth6: negotiated 100baseTx-FD, link ok
SIOCGMIIPHY on 'eth7' failed: Operation not supported

What I don't understand is why the cards only run at 100 Mbit there, which in turn would explain why the other machines freeze/crash.
 
Something doesn't seem right at all. What does "ethtool" say about your network cards? Does the switch with the 10 Gbit cards really negotiate 10 Gbit, and the switch with the Gigabit links as well? Trunked?
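For a quick overview, something like this can be run on each node (just a sketch; interface names as on your hosts):
Code:
for i in eth0 eth1 eth2 eth3 eth4 eth5 eth6 eth7; do echo "== $i"; ethtool $i | grep -E 'Speed|Duplex|Link detected'; done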
 
I have now checked again:
Node1 and Node2 only have 1 Gbit cards each, but those two do little more than run the VMs.
Node3 and Node4 are the Ceph servers.
On Node3 there was indeed a fault: the cards were only linked at 1 Gbit.
Code:
root@pve3:~# ethtool eth7
Settings for eth7:
        Supported ports: [ TP ]
        Supported link modes:   100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 10000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

This is the result now. The two 10 Gbit ports each hang off their own 10 Gbit switch, and this is now detected correctly.
I have also applied the latest updates on all 4 nodes.
But as soon as I start an rsync from an external machine into a VM, the other VMs hang with kernel hung messages.

Code:
2017-06-06 08:59:43.057988 mon.0 10.10.10.1:6789/0 3715 : cluster [INF] pgmap v4999775: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 3139 B/s rd, 29835 kB/s wr, 32 op/s
2017-06-06 08:59:44.201997 mon.0 10.10.10.1:6789/0 3716 : cluster [INF] pgmap v4999776: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 13745 B/s rd, 17658 kB/s wr, 36 op/s
2017-06-06 08:59:45.436612 mon.0 10.10.10.1:6789/0 3717 : cluster [INF] pgmap v4999777: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 13726 B/s rd, 10765 kB/s wr, 28 op/s
2017-06-06 08:59:46.828684 mon.0 10.10.10.1:6789/0 3718 : cluster [INF] pgmap v4999778: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 1599 B/s wr, 0 op/s
2017-06-06 08:59:48.053736 mon.0 10.10.10.1:6789/0 3719 : cluster [INF] pgmap v4999779: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 13018 B/s wr, 4 op/s
2017-06-06 08:59:48.780342 mon.0 10.10.10.1:6789/0 3720 : cluster [INF] HEALTH_WARN; too few PGs per OSD (6 < min 30)
2017-06-06 08:59:49.278553 mon.0 10.10.10.1:6789/0 3721 : cluster [INF] pgmap v4999780: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 5045 B/s rd, 11440 kB/s wr, 37 op/s
2017-06-06 08:59:50.638865 mon.0 10.10.10.1:6789/0 3722 : cluster [INF] pgmap v4999781: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 4620 B/s rd, 10477 kB/s wr, 34 op/s
2017-06-06 08:59:51.763512 mon.0 10.10.10.1:6789/0 3723 : cluster [INF] pgmap v4999782: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 31691 B/s wr, 13 op/s
2017-06-06 08:59:53.094849 mon.0 10.10.10.1:6789/0 3724 : cluster [INF] pgmap v4999783: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 25244 B/s wr, 9 op/s
2017-06-06 08:59:54.296824 mon.0 10.10.10.1:6789/0 3725 : cluster [INF] pgmap v4999784: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 8338 B/s rd, 17258 kB/s wr, 69 op/s
2017-06-06 08:59:55.587888 mon.0 10.10.10.1:6789/0 3726 : cluster [INF] pgmap v4999785: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 8115 B/s rd, 19711 kB/s wr, 71 op/s
2017-06-06 08:59:56.897987 mon.0 10.10.10.1:6789/0 3727 : cluster [INF] pgmap v4999786: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 6175 B/s rd, 7848 kB/s wr, 17 op/s
2017-06-06 08:59:58.156754 mon.0 10.10.10.1:6789/0 3728 : cluster [INF] pgmap v4999787: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 6681 B/s rd, 5519 kB/s wr, 19 op/s
2017-06-06 08:59:59.325290 mon.0 10.10.10.1:6789/0 3729 : cluster [INF] pgmap v4999788: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 9735 kB/s wr, 46 op/s
2017-06-06 09:00:00.000179 mon.0 10.10.10.1:6789/0 3730 : cluster [INF] HEALTH_WARN; 17 requests are blocked > 32 sec; too few PGs per OSD (6 < min 30)
2017-06-06 09:00:00.765264 mon.0 10.10.10.1:6789/0 3731 : cluster [INF] pgmap v4999789: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 12089 kB/s wr, 41 op/s
2017-06-06 09:00:02.075191 mon.0 10.10.10.1:6789/0 3732 : cluster [INF] pgmap v4999790: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 1550 B/s rd, 6236 kB/s wr, 9 op/s
2017-06-06 09:00:03.691862 mon.0 10.10.10.1:6789/0 3733 : cluster [INF] pgmap v4999791: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 1558 B/s rd, 7858 kB/s wr, 18 op/s
2017-06-06 09:00:05.010072 mon.0 10.10.10.1:6789/0 3734 : cluster [INF] pgmap v4999792: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 1391 B/s rd, 7379 kB/s wr, 28 op/s
2017-06-06 09:00:06.543150 mon.0 10.10.10.1:6789/0 3735 : cluster [INF] pgmap v4999793: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 1400 B/s rd, 9981 kB/s wr, 33 op/s
2017-06-06 09:00:07.859962 mon.0 10.10.10.1:6789/0 3736 : cluster [INF] pgmap v4999794: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 8414 kB/s wr, 28 op/s
2017-06-06 09:00:09.003544 mon.0 10.10.10.1:6789/0 3737 : cluster [INF] pgmap v4999795: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 246 kB/s rd, 14285 kB/s wr, 130 op/s
2017-06-06 09:00:10.260690 mon.0 10.10.10.1:6789/0 3738 : cluster [INF] pgmap v4999796: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 286 kB/s rd, 19839 kB/s wr, 168 op/s
2017-06-06 09:00:11.436811 mon.0 10.10.10.1:6789/0 3739 : cluster [INF] pgmap v4999797: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 17064 B/s rd, 5294 kB/s wr, 72 op/s
2017-06-06 09:00:04.726561 osd.4 10.10.10.4:6820/3737 5 : cluster [WRN] 3 slow requests, 3 included below; oldest blocked for > 30.201571 secs
2017-06-06 09:00:04.726576 osd.4 10.10.10.4:6820/3737 6 : cluster [WRN] slow request 30.201571 seconds old, received at 2017-06-06 08:59:34.524892: osd_op(client.12784103.1:1058 2.ee06bf5f rbd_data.a534602ae8944a.00000000000013cf [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~3145728] snapc 0=[] ondisk+write e425) currently waiting for subops from 18
2017-06-06 09:00:04.726582 osd.4 10.10.10.4:6820/3737 7 : cluster [WRN] slow request 30.190284 seconds old, received at 2017-06-06 08:59:34.536179: osd_op(client.12784103.1:1060 2.4e32601f rbd_data.a534602ae8944a.00000000000013d1 [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~3145728] snapc 0=[] ondisk+write e425) currently waiting for subops from 18
2017-06-06 09:00:04.726591 osd.4 10.10.10.4:6820/3737 8 : cluster [WRN] slow request 30.158820 seconds old, received at 2017-06-06 08:59:34.567643: osd_op(client.12784103.1:1075 2.4e32601f rbd_data.a534602ae8944a.00000000000013d1 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~1048576] snapc 0=[] ondisk+write e425) currently waiting for subops from 18
2017-06-06 09:00:06.726862 osd.4 10.10.10.4:6820/3737 9 : cluster [WRN] 4 slow requests, 1 included below; oldest blocked for > 32.201898 secs
2017-06-06 09:00:06.726871 osd.4 10.10.10.4:6820/3737 10 : cluster [WRN] slow request 30.328605 seconds old, received at 2017-06-06 08:59:36.398185: osd_op(client.1405111.1:127109 2.db629b9f rbd_data.7f3448238e1f29.0000000000000054 [set-alloc-hint object_size 4194304 write_size 4194304,write 1572864~4096] snapc 0=[] ondisk+write e425) currently waiting for subops from 18
2017-06-06 09:00:13.220433 mon.0 10.10.10.1:6789/0 3740 : cluster [INF] pgmap v4999798: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 14491 B/s rd, 304 kB/s wr, 44 op/s
2017-06-06 09:00:14.480753 mon.0 10.10.10.1:6789/0 3741 : cluster [INF] pgmap v4999799: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 56737 B/s rd, 15443 kB/s wr, 113 op/s
2017-06-06 09:00:16.380095 mon.0 10.10.10.1:6789/0 3742 : cluster [INF] pgmap v4999800: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 59824 B/s rd, 16007 kB/s wr, 158 op/s
2017-06-06 09:00:18.155586 mon.0 10.10.10.1:6789/0 3743 : cluster [INF] pgmap v4999801: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 666 kB/s rd, 714 kB/s wr, 108 op/s
2017-06-06 09:00:19.824415 mon.0 10.10.10.1:6789/0 3744 : cluster [INF] pgmap v4999802: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 1082 kB/s rd, 21686 kB/s wr, 136 op/s
2017-06-06 09:00:21.431702 mon.0 10.10.10.1:6789/0 3745 : cluster [INF] pgmap v4999803: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 479 kB/s rd, 21854 kB/s wr, 88 op/s
2017-06-06 09:00:22.672776 mon.0 10.10.10.1:6789/0 3746 : cluster [INF] pgmap v4999804: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 807 kB/s wr, 21 op/s
2017-06-06 09:00:23.908558 mon.0 10.10.10.1:6789/0 3747 : cluster [INF] pgmap v4999805: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 84577 B/s rd, 15109 kB/s wr, 42 op/s
2017-06-06 09:00:25.107541 mon.0 10.10.10.1:6789/0 3748 : cluster [INF] pgmap v4999806: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 177 kB/s rd, 19618 kB/s wr, 44 op/s
2017-06-06 09:00:26.700220 mon.0 10.10.10.1:6789/0 3749 : cluster [INF] pgmap v4999807: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 224 kB/s rd, 15276 kB/s wr, 26 op/s
2017-06-06 09:00:27.883163 mon.0 10.10.10.1:6789/0 3750 : cluster [INF] pgmap v4999808: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 124 kB/s rd, 12394 kB/s wr, 18 op/s
2017-06-06 09:00:29.093702 mon.0 10.10.10.1:6789/0 3751 : cluster [INF] pgmap v4999809: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 28460 B/s rd, 6926 kB/s wr, 48 op/s
2017-06-06 09:00:30.282864 mon.0 10.10.10.1:6789/0 3752 : cluster [INF] pgmap v4999810: 64 pgs: 64 active+clean; 486 GB data, 955 GB used, 36181 GB / 37137 GB avail; 51366 B/s rd, 25310 kB/s wr, 86 op/s

2017-06-06 09:00:55.242515 osd.5 10.10.10.4:6800/2871 1 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.174485 secs
2017-06-06 09:00:55.242528 osd.5 10.10.10.4:6800/2871 2 : cluster [WRN] slow request 30.174485 seconds old, received at 2017-06-06 09:00:25.067853: osd_op(client.12774101.1:18092 2.69c680b6 rbd_data.2f20cc238e1f29.00000000000006e6 [set-alloc-hint object_size 4194304 write_size 4194304,write 1753088~2441216] snapc 0=[] ondisk+write e425) currently waiting for subops from 12
2017-06-06 09:00:57.242819 osd.5 10.10.10.4:6800/2871 3 : cluster [WRN] 3 slow requests, 2 included below; oldest blocked for > 32.174872 secs
2017-06-06 09:00:57.242829 osd.5 10.10.10.4:6800/2871 4 : cluster [WRN] slow request 30.252099 seconds old, received at 2017-06-06 09:00:26.990626: osd_op(client.12784103.1:1327 2.4fbd97a8 rbd_data.a534602ae8944a.000000000000145e [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~3145728] snapc 0=[] ondisk+write e425) currently waiting for subops from 12
2017-06-06 09:00:57.242838 osd.5 10.10.10.4:6800/2871 5 : cluster [WRN] slow request 30.189841 seconds old, received at 2017-06-06 09:00:27.052884: osd_op(client.12784103.1:1397 2.59629c18 rbd_data.a534602ae8944a.0000000000001480 [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~3145728] snapc 0=[] ondisk+write e425) currently waiting for subops from 16
2017-06-06 09:00:57.682966 mon.0 10.10.10.1:6789/0 3771 : cluster [INF] pgmap v4999828: 64 pgs: 64 active+clean; 486 GB data, 956 GB used, 36180 GB / 37137 GB avail; 10138 kB/s wr, 14 op/s
2017-06-06 09:00:57.025312 osd.10 10.10.10.4:6808/3167 3 : cluster [WRN] 3 slow requests, 3 included below; oldest blocked for > 30.102529 secs
2017-06-06 09:00:57.025327 osd.10 10.10.10.4:6808/3167 4 : cluster [WRN] slow request 30.102529 seconds old, received at 2017-06-06 09:00:26.922606: osd_op(client.12784103.1:1335 2.f8c8b0b4 rbd_data.a534602ae8944a.0000000000001462 [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~3145728] snapc 0=[] ondisk+write e425) currently waiting for subops from 15
2017-06-06 09:00:57.025338 osd.10 10.10.10.4:6808/3167 5 : cluster [WRN] slow request 30.024554 seconds old, received at 2017-06-06 09:00:27.000582: osd_op(client.12784103.1:1358 2.41e71934 rbd_data.a534602ae8944a.000000000000146f [set-alloc-hint object_size 4194304 write_size 4194304,write 0~1048576] snapc 0=[] ondisk+write e425) currently waiting for subops from 15
2017-06-06 09:00:57.025344 osd.10 10.10.10.4:6808/3167 6 : cluster [WRN] slow request 30.020676 seconds old, received at 2017-06-06 09:00:27.004459: osd_op(client.12784103.1:1361 2.41e71934 rbd_data.a534602ae8944a.000000000000146f [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~3145728] snapc 0=[] ondisk+write e425) currently waiting for subops from 15
2017-06-06 09:00:57.988581 osd.19 10.10.10.3:6836/4555 145 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.759341 secs
2017-06-06 09:00:57.988588 osd.19 10.10.10.3:6836/4555 146 : cluster [WRN] slow request 30.759341 seconds old, received at 2017-06-06 09:00:27.229179: osd_op(client.12784103.1:1228 2.14f0f827 rbd_data.a534602ae8944a.000000000000142b [set-alloc-hint object_size 4194304 write_size 4194304,write 118784~4075520] snapc 0=[] ondisk+write e425) currently waiting for subops from 8
2017-06-06 09:00:58.285551 osd.20 10.10.10.3:6832/4303 70 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.637472 secs
2017-06-06 09:00:58.285563 osd.20 10.10.10.3:6832/4303 71 : cluster [WRN] slow request 30.637472 seconds old, received at 2017-06-06 09:00:27.648031: osd_op(client.12784103.1:1224 2.f32c2e53 rbd_data.a534602ae8944a.0000000000001429 [set-alloc-hint object_size 4194304 write_size 4194304,write 118784~4075520] snapc 0=[] ondisk+write e425) currently waiting for subops from 4
2017-06-06 09:00:58.913228 mon.0 10.10.10.1:6789/0 3772 : cluster [INF] pgmap v4999829: 64 pgs: 64 active+clean; 486 GB data, 956 GB used, 36180 GB / 37137 GB avail; 12616 kB/s wr, 18 op/s
2017-06-06 09:01:00.146249 mon.0 10.10.10.1:6789/0 3773 : cluster [INF] pgmap v4999830: 64 pgs: 64 active+clean; 486 GB data, 956 GB used, 36180 GB / 37137 GB avail; 13841 kB/s wr, 18 op/s

2017-06-06 09:00:55.753508 osd.8 10.10.10.4:6812/3337 3 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.535736 secs
2017-06-06 09:00:55.753536 osd.8 10.10.10.4:6812/3337 4 : cluster [WRN] slow request 30.535736 seconds old, received at 2017-06-06 09:00:25.217686: osd_op(client.12774101.1:18136 2.52b5b6bb rbd_data.2f20cc238e1f29.000000000000163a [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~3145728] snapc 0=[] ondisk+write e425) currently waiting for subops from 14
2017-06-06 09:00:57.732412 osd.4 10.10.10.4:6820/3737 11 : cluster [WRN] 5 slow requests, 5 included below; oldest blocked for > 30.791797 secs
2017-06-06 09:00:57.732420 osd.4 10.10.10.4:6820/3737 12 : cluster [WRN] slow request 30.671010 seconds old, received at 2017-06-06 09:00:27.061214: osd_op(client.12784103.1:1385 2.e82ff09f rbd_data.a534602ae8944a.000000000000147a [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~3145728] snapc 0=[] ondisk+write e425) currently waiting for subops from 18
2017-06-06 09:00:57.732433 osd.4 10.10.10.4:6820/3737 13 : cluster [WRN] slow request 30.655459 seconds old, received at 2017-06-06 09:00:27.076765: osd_op(client.12784103.1:1404 2.9c114cdf rbd_data.a534602ae8944a.0000000000001485 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~1048576] snapc 0=[] ondisk+write e425) currently waiting for subops from 18
2017-06-06 09:00:57.732438 osd.4 10.10.10.4:6820/3737 14 : cluster [WRN] slow request 30.652948 seconds old, received at 2017-06-06 09:00:27.079276: osd_op(client.12784103.1:1407 2.9c114cdf rbd_data.a534602ae8944a.0000000000001485 [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~3145728] snapc 0=[] ondisk+write e425) currently waiting for subops from 18
2017-06-06 09:00:57.732445 osd.4 10.10.10.4:6820/3737 15 : cluster [WRN] slow request 30.641371 seconds old, received at 2017-06-06 09:00:27.090854: osd_op(client.12784103.1:1410 2.e82ff09f rbd_data.a534602ae8944a.000000000000147a [set-alloc-hint object_size 4194304 write_size 4194304,write 0~1048576] snapc 0=[] ondisk+write e425) currently waiting for subops from 18
2017-06-06 09:00:57.732450 osd.4 10.10.10.4:6820/3737 16 : cluster [WRN] slow request 30.791797 seconds old, received at 2017-06-06 09:00:26.940428: osd_op(client.12784103.1:1338 2.24ed1b5f rbd_data.a534602ae8944a.0000000000001464 [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~3145728] snapc 0=[] ondisk+write e425) currently waiting for subops from 18
2017-06-06 09:00:57.753884 osd.8 10.10.10.4:6812/3337 5 : cluster [WRN] 6 slow requests, 5 included below; oldest blocked for > 30.892938 secs

2017-06-06 09:01:09.989998 osd.19 10.10.10.3:6836/4555 162 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.825244 secs
2017-06-06 09:01:09.990005 osd.19 10.10.10.3:6836/4555 163 : cluster [WRN] slow request 30.825244 seconds old, received at 2017-06-06 09:00:39.164704: osd_op(client.12784103.1:1234 2.5ae922a7 rbd_data.a534602ae8944a.000000000000142e [set-alloc-hint object_size 4194304 write_size 4194304,write 118784~4075520] snapc 0=[] ondisk+write e425) currently waiting for subops from 8
2017-06-06 09:01:20.744132 mon.0 10.10.10.1:6789/0 3789 : cluster [INF] pgmap v4999846: 64 pgs: 64 active+clean; 486 GB data, 956 GB used, 36180 GB / 37137 GB avail; 14211 kB/s wr, 39 op/s
2017-06-06 09:01:12.756191 osd.8 10.10.10.4:6812/3337 18 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 45.744281 secs
2017-06-06 09:01:12.756202 osd.8 10.10.10.4:6812/3337 19 : cluster [WRN] slow request 30.400018 seconds old, received at 2017-06-06 09:00:42.356079: osd_op(client.1405111.1:128086 2.f72bb205 rbd_data.46b8ec238e1f29.0000000000005c83 [set-alloc-hint object_size 4194304 write_size 4194304,write 1376256~2818048] snapc 0=[] ondisk+write e425) currently waiting for subops from 1
2017-06-06 09:01:15.735115 osd.4 10.10.10.4:6820/3737 30 : cluster [WRN] 7 slow requests, 1 included below; oldest blocked for > 48.644190 secs
2017-06-06 09:01:15.735125 osd.4 10.10.10.4:6820/3737 31 : cluster [WRN] slow request 30.431899 seconds old, received at 2017-06-06 09:00:45.303145: osd_op(client.1405111.1:128102 2.bbe56121 rbd_data.46b8ec238e1f29.0000000000008aea [set-alloc-hint object_size 4194304 write_size 4194304,write 0~2097152] snapc 0=[] ondisk+write e425) currently waiting for subops from 21
2017-06-06 09:01:16.735364 osd.4 10.10.10.4:6820/3737 32 : cluster [WRN] 9 slow requests, 2 included below; oldest blocked for > 49.644348 secs
2017-06-06 09:01:16.735378 osd.4 10.10.10.4:6820/3737 33 : cluster [WRN] slow request 30.764997 seconds old, received at 2017-06-06 09:00:45.970204: osd_op(client.12784103.1:1445 2.e6d08d21 rbd_data.a534602ae8944a.000000000009794d [set-alloc-hint object_size 4194304 write_size 4194304,write 0~1622016] snapc 0=[] ondisk+write e425) currently waiting for subops from 21
2017-06-06 09:01:16.735385 osd.4 10.10.10.4:6820/3737 34 : cluster [WRN] slow request 30.761092 seconds old, received at 2017-06-06 09:00:45.974110: osd_op(client.12784103.1:1448 2.e6d08d21 rbd_data.a534602ae8944a.000000000009794d [set-alloc-hint object_size 4194304 write_size 4194304,write 1622016~2572288] snapc 0=[] ondisk+write e425) currently waiting for subops from 21
2017-06-06 09:01:17.028662 osd.9 10.10.10.4:6816/3502 12 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 38.204303 secs
2017-06-06 09:01:17.028681 osd.9 10.10.10.4:6816/3502 13 : cluster [WRN] slow request 30.338568 seconds old, received at 2017-06-06 09:00:46.690015: osd_op(client.1405111.1:128352 2.f0441dc6 rbd_data.5e0c442ae8944a.0000000000000000 [set-alloc-hint object_size 4194304 write_size 4194304,write 1048576~8192] snapc 0=[] ondisk+write e425) currently waiting for subops from 19
2017-06-06 09:01:19.757034 osd.8 10.10.10.4:6812/3337 20 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.043494 secs
2017-06-06 09:01:19.757048 osd.8 10.10.10.4:6812/3337 21 : cluster [WRN] slow request 30.043494 seconds old, received at 2017-06-06 09:00:49.713461: osd_op(client.1405111.1:128092 2.66e7dd05 rbd_data.46b8ec238e1f29.0000000000008ae2 [set-alloc-hint object_size 4194304 write_size 4194304,write 2097152~2097152] snapc 0=[] ondisk+write e425) currently waiting for subops from 1
2017-06-06 09:01:21.945036 mon.0 10.10.10.1:6789/0 3790 : cluster [INF] pgmap v4999847: 64 pgs: 64 active+clean; 486 GB data, 956 GB used, 36180 GB / 37137 GB avail; 12225 kB/s wr, 36 op/s
2017-06-06 09:01:23.095494 mon.0 10.10.10.1:6789/0 3791 : cluster [INF] pgmap v4999848: 64 pgs: 64 active+clean; 486 GB data, 956 GB used, 36180 GB / 37137 GB avail; 6847 kB/s wr, 23 op/s
2017-06-06 09:01:24.497095 mon.0 10.10.10.1:6789/0 3792 : cluster [INF] pgmap v4999849: 64 pgs: 64 active+clean; 486 GB data, 956 GB used, 36180 GB / 37137 GB avail; 54586 B/s rd, 16223 kB/s wr, 37 op/s
2017-06-06 09:01:25.674167 mon.0 10.10.10.1:6789/0 3794 : cluster [INF] pgmap v4999850: 64 pgs: 64 active+clean; 486 GB data, 956 GB used, 36180 GB / 37137 GB avail; 51385 B/s rd, 20224 kB/s wr, 42 op/s
2017-06-06 09:01:26.807653 mon.0 10.10.10.1:6789/0 3795 : cluster [INF] pgmap v4999851: 64 pgs: 64 active+clean; 486 GB data, 956 GB used, 36180 GB / 37137 GB avail; 19436 B/s rd, 9759 kB/s wr, 30 op/s
2017-06-06 09:01:25.247311 osd.5 10.10.10.4:6800/2871 26 : cluster [WRN] 4 slow requests, 4 included below; oldest blocked for > 30.960180 secs
2017-06-06 09:01:25.247324 osd.5 10.10.10.4:6800/2871 27 : cluster [WRN] slow request 30.960180 seconds old, received at 2017-06-06 09:00:54.286300: osd_op(client.12784103.1:1465 2.b610bdf6 rbd_data.a534602ae8944a.0000000000097959 [set-alloc-hint object_size 4194304 write_size 4194304,write 3334144~860160] snapc 0=[] ondisk+write e425) currently waiting for subops from 12
2017-06-06 09:01:25.247352 osd.5 10.10.10.4:6800/2871 28 : cluster [WRN] slow request 30.295933 seconds old, received at 2017-06-06 09:00:54.950547: osd_op(client.1405111.1:128154 2.fa219ce8 rbd_data.6d0b982ae8944a.0000000000000835 [set-alloc-hint object_size 4194304 write_size 4194304,write 2039808~4096] snapc 0=[] ondisk+write e425) currently waiting for subops from 12
2017-06-06 09:01:25.247359 osd.5 10.10.10.4:6800/2871 29 : cluster [WRN] slow request 30.295817 seconds old, received at 2017-06-06 09:00:54.950662: osd_op(client.1405111.1:128164 2.3aec07e8 rbd_data.6d0b982ae8944a.0000000000004c00 [set-alloc-hint object_size 4194304 write_size 4194304,write 1114112~4096] snapc 0=[] ondisk+write e425) currently waiting for subops from 12
2017-06-06 09:01:25.247364 osd.5 10.10.10.4:6800/2871 30 : cluster [WRN] slow request 30.294122 seconds old, received at 2017-06-06 09:00:54.952357: osd_op(client.1405111.1:128165 2.3aec07e8 rbd_data.6d0b982ae8944a.0000000000004c00 [set-alloc-hint object_size 4194304 write_size 4194304,write 1179648~4096] snapc 0=[] ondisk+write e425) currently waiting for subops from 12

This is what shows up in the Ceph log.
 
Would
ceph osd pool set ceph-vm pg_num 512
and
ceph osd pool set ceph-vm pgp_num 512
help, and can these be run safely on a production cluster?
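For reference, the current values can be read back with the standard Ceph CLI before changing anything; the usual rule of thumb is roughly 100 PGs per OSD divided by the replica count, rounded to a power of two:
Code:
ceph osd pool get ceph-vm pg_num
ceph osd pool get ceph-vm pgp_num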
 
Hi,
it could help, but the resulting rebalance will probably also cause kernel hangs. That can be mitigated with max_backfills=1.
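For example, on a Jewel cluster this can be injected at runtime (a sketch; it does not survive OSD restarts unless also set in the [osd] section of ceph.conf):
Code:
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'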

Udo.

PS: how many OSDs per host do you have?
 
Thanks Udo.
I have 20 OSDs: host1 has 8 OSDs and host2 has 12 OSDs. The cases didn't allow any other layout, but 2-3 more cases with at least 12 OSDs each are planned.

The rebuild has finished, but my original problem, that various other machines freeze, remains.
The network is now fixed at 10 Gbit and pg_num is 512, which Ceph health now also confirms as green.
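To see whether the slow requests come back while such a transfer is running, the cluster can be watched live with the standard tools, e.g.:
Code:
ceph -w
ceph health detail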
 
Hi,
only two hosts?! And how many replicas?

The default (for very good reasons) is a replica count of 3 on a host basis, i.e. one node can fail.

Any slow warnings in the log?
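Both can be checked quickly, e.g. (assuming the default cluster log location on the mon hosts):
Code:
ceph osd pool get ceph-vm size
ceph osd pool get ceph-vm min_size
grep -i "slow request" /var/log/ceph/ceph.log | tail -n 20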

Udo
 
Um, no :) 2 hosts currently run the VMs and 2 hosts provide the Ceph storage. There are more machines intended for Ceph; I am well aware that this setup is not really fault-tolerant. The additional machines are just still in use elsewhere at the moment.
 
The rsync is started on the external host:
rsync -avzP -e ssh /quelle user@ziel:/ziel
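A throttled variant (the limit is just an example value, in kB/s) would take some write pressure off Ceph while testing:
rsync -avzP --bwlimit=20000 -e ssh /quelle user@ziel:/ziel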


The network for the VLANs vmbr49, vmbr50 and vmbr63 runs over a 1 Gbit managed switch for the external side.
Internally, 10.10.10.0/24 is used for Ceph and management, each on 10 Gbit switches.
-----------------------------------------------------------------------
PVE Version
-----------------------------------------------------------------------
Code:
root@pve4:~# pveversion -v
proxmox-ve: 4.4-88 (running kernel: 4.4.62-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.40-1-pve: 4.4.40-82
pve-kernel-4.4.62-1-pve: 4.4.62-88
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-50
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-95
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-100
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
openvswitch-switch: 2.6.0-2
ceph: 10.2.7-1~bpo80+1
----------------------------------------------------------------------
VM that causes the problem
----------------------------------------------------------------------
Code:
root@pve4:~# cat /etc/pve/qemu-server/211.conf
bootdisk: virtio0
cores: 2
cpu: Penryn
ide2: none,media=cdrom
memory: 4096
name: VM211
net0: virtio=F2:75:0B:4D:BF:B3,bridge=vmbr50
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=604fbfee-bba0-4098-a68e-d9a3672a455b
sockets: 1
virtio0: ceph-vm:vm-211-disk-1,size=16G
virtio1: ceph-vm:vm-211-disk-2,cache=writeback,size=3000G
-----------------------------------------------------------------------
CEPH OSD TREE
-----------------------------------------------------------------------
Code:
root@pve4:~# ceph osd tree
ID WEIGHT   TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 36.26662 root default
-2 23.57143     host pve3
11  1.81319         osd.11      up  1.00000          1.00000
12  1.81319         osd.12      up  1.00000          1.00000
13  1.81319         osd.13      up  1.00000          1.00000
14  1.81319         osd.14      up  1.00000          1.00000
15  1.81319         osd.15      up  1.00000          1.00000
16  1.81319         osd.16      up  1.00000          1.00000
17  1.81319         osd.17      up  1.00000          1.00000
18  1.81319         osd.18      up  1.00000          1.00000
19  1.81319         osd.19      up  1.00000          1.00000
20  1.81319         osd.20      up  1.00000          1.00000
21  1.81319         osd.21      up  1.00000          1.00000
 0  1.81319         osd.0       up  1.00000          1.00000
 1  1.81319         osd.1       up  1.00000          1.00000
-3 12.69519     host pve4
 4  1.81360         osd.4       up  1.00000          1.00000
 5  1.81360         osd.5       up  1.00000          1.00000
 6  1.81360         osd.6       up  1.00000          1.00000
 7  1.81360         osd.7       up  1.00000          1.00000
 8  1.81360         osd.8       up  1.00000          1.00000
 9  1.81360         osd.9       up  1.00000          1.00000
10  1.81360         osd.10      up  1.00000          1.00000
-----------------------------------------------------------------------
CEPH DF
-----------------------------------------------------------------------
Code:
root@pve4:~# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    37137G     36074G        1062G          2.86
POOLS:
    NAME        ID     USED     %USED     MAX AVAIL     OBJECTS
    ceph-vm     2      533G      2.92        17746G      140607
-----------------------------------------------------------------------
ceph.conf
-----------------------------------------------------------------------
Code:
root@pve4:~# cat /etc/pve/ceph.conf
[global]
         auth client required = cephx
         auth cluster required = cephx
         auth service required = cephx
         cluster network = 10.10.10.0/24
         filestore xattr use omap = true
         fsid = 393e3182-c345-4d2b-a746-42999849e3e3
         keyring = /etc/pve/priv/$cluster.$name.keyring
         osd journal size = 5120
         osd pool default min size = 1
         public network = 10.10.10.0/24

[osd]
         keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.3]
         host = pve4
         mon addr = 10.10.10.4:6789

[mon.0]
         host = pve1
         mon addr = 10.10.10.1:6789

[mon.2]
         host = pve3
         mon addr = 10.10.10.3:6789

[mon.1]
         host = pve2
         mon addr = 10.10.10.2:6789
-----------------------------------------------------------------------
storage.cfg
-----------------------------------------------------------------------
Code:
root@pve4:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso

lvmthin: local-lvm
        vgname pve
        thinpool data
        content rootdir,images

rbd: ceph-vm
        monhost 10.10.10.1 10.10.10.2 10.10.10.3 10.10.10.4
        pool ceph-vm
        content images
        username admin
        krbd 1
-----------------------------------------------------------------------
corosync.conf
-----------------------------------------------------------------------
Code:
root@pve4:~# cat /etc/corosync/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve4
    nodeid: 4
    quorum_votes: 1
    ring0_addr: pve4
  }

  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: pve2
  }

  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: pve1
  }

  node {
    name: pve3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: pve3
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cencloud
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
-----------------------------------------------------------------------
crush map / not modified
-----------------------------------------------------------------------
Code:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 device2
device 3 device3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 osd.18
device 19 osd.19
device 20 osd.20
device 21 osd.21

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host pve3 {
    id -2        # do not change unnecessarily
    # weight 23.571
    alg straw
    hash 0    # rjenkins1
    item osd.11 weight 1.813
    item osd.12 weight 1.813
    item osd.13 weight 1.813
    item osd.14 weight 1.813
    item osd.15 weight 1.813
    item osd.16 weight 1.813
    item osd.17 weight 1.813
    item osd.18 weight 1.813
    item osd.19 weight 1.813
    item osd.20 weight 1.813
    item osd.21 weight 1.813
    item osd.0 weight 1.813
    item osd.1 weight 1.813
}
host pve4 {
    id -3        # do not change unnecessarily
    # weight 12.695
    alg straw
    hash 0    # rjenkins1
    item osd.4 weight 1.814
    item osd.5 weight 1.814
    item osd.6 weight 1.814
    item osd.7 weight 1.814
    item osd.8 weight 1.814
    item osd.9 weight 1.814
    item osd.10 weight 1.814
}
root default {
    id -1        # do not change unnecessarily
    # weight 36.267
    alg straw
    hash 0    # rjenkins1
    item pve3 weight 23.571
    item pve4 weight 12.695
}

# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}

# end crush map
-----------------------------------------------------------------------
Network
-----------------------------------------------------------------------
Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad

auto bond1
iface bond1 inet manual
        slaves eth1
        bond_miimon 100
        bond_mode 802.3ad

auto bond1.49
iface bond1.49 inet manual
        vlan-raw-device bond1

auto bond1.50
iface bond1.50 inet manual
        vlan-raw-device bond1

auto bond1.63
iface bond1.63 inet manual
        vlan-raw-device bond1

auto vmbr0
iface vmbr0 inet static
        address  10.10.10.4
        netmask  255.255.255.0
        gateway  10.10.10.254
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

auto vmbr49
iface vmbr49 inet manual
        bridge_ports bond1.49
        bridge_stp off
        bridge_fd 0

auto vmbr50
iface vmbr50 inet manual
        bridge_ports bond1.50
        bridge_stp off
        bridge_fd 0

auto vmbr63
iface vmbr63 inet manual
        bridge_ports bond1.63
        bridge_stp off
        bridge_fd 0
 
