Search results

  1. [SOLVED] Why do KNET chose ring with higher priority instead of lower one (as said in manual?)

    Here is an answer... https://github.com/corosync/corosync/commit/0a323ff2ed0f2aff9cb691072906e69cb96ed662 The PVE wiki should be updated accordingly. Dumb corosync...
  2. [SOLVED] Why do KNET chose ring with higher priority instead of lower one (as said in manual?)

    Could anyone explain why corosync (KNET) chooses the best link with the highest priority instead of the lowest one (as written in the PVE wiki)? Very confused with corosync 3 indeed... quorum { provider: corosync_votequorum } totem { cluster_name: amarao-cluster config_version: 20 interface... (see the corosync.conf link-priority sketch below the results)
  3. PVE 5.4-11 + Corosync 3.x: major issues

    Another observation is that in my setups only nodes with no swap (ZFS as root and an NFS share as datastore) and vm.swappiness=0 in sysctl.conf are affected. I do remember the unresolved issue with PVE 5.x where swap was used by PVE processes even with vm.swappiness=0. Couldn't this be the case... (see the sysctl check below the results)
  4. PVE 5.4-11 + Corosync 3.x: major issues

    Another hang, this one breaking even the NFS connection, with a Linux kernel trace
  5. PVE 5.4-11 + Corosync 3.x: major issues

    Could the problem be related to jumbo frames and/or a dual-ring configuration? I'm facing the same issue - corosync randomly hangs on different nodes. I have two rings, 10 GbE + 1 GbE, with MTU = 9000 on both nets (see the MTU check below the results)
  6. PVE 6 cluster nodes randomly hangs (10gbe network down)

    Don't know how this could be related, but the following was observed during boot: [Wed Sep 11 04:37:27 2019] ACPI: Using IOAPIC for interrupt routing [Wed Sep 11 04:37:27 2019] HEST: Table parsing has been initialized. [Wed Sep 11 04:37:27 2019] PCI: Using host bridge windows from ACPI; if...
  7. PVE 6 cluster nodes randomly hangs (10gbe network down)

    There was no unusual activity on that node at the time of the hang
  8. PVE 6 cluster nodes randomly hangs (10gbe network down)

    root@pve-node3:~# dmesg -T | grep Intel [Sun Sep 8 04:22:18 2019] Intel GenuineIntel [Sun Sep 8 04:22:19 2019] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz (family: 0x6, model: 0x2d, stepping: 0x7) [Sun Sep 8 04:22:19 2019] Performance Events: PEBS fmt1+, SandyBridge events...
  9. PVE 6 cluster nodes randomly hangs (10gbe network down)

    root@pve-node3:~# lspci 00:00.0 Host bridge: Intel Corporation Xeon E5/Core i7 DMI2 (rev 07) 00:01.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 1a (rev 07) 00:02.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 2a (rev 07) 00:03.0 PCI bridge...
  10. PVE 6 cluster nodes randomly hangs (10gbe network down)

    root@pve-node3:~# uname -a Linux pve-node3 5.0.21-1-pve #1 SMP PVE 5.0.21-2 (Wed, 28 Aug 2019 15:12:18 +0200) x86_64 GNU/Linux root@pve-node3:~# pveversion -v proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve) pve-manager: 6.0-7 (running version: 6.0-7/28984024) pve-kernel-5.0: 6.0-7...
  11. PVE 6 cluster nodes randomly hangs (10gbe network down)

    root@pve-node3:~# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host...
  12. PVE 6 cluster nodes randomly hangs (10gbe network down)

    [Sun Sep 8 04:23:20 2019] fwbr143i0: port 2(tap143i0) entered disabled state [Sun Sep 8 04:23:20 2019] fwbr143i0: port 2(tap143i0) entered blocking state [Sun Sep 8 04:23:20 2019] fwbr143i0: port 2(tap143i0) entered forwarding state [Sun Sep 8 07:25:56 2019] perf: interrupt took too long...
  13. PVE 6 cluster nodes randomly hangs (10gbe network down)

    I've noticed that after installing a PVE 6.x cluster with a 10 Gb network for inter-cluster and storage (NFS) communication, cluster nodes randomly hang - still available through the Ethernet (1 GbE) network but NOT accessible via the main 10 GbE, so neither cluster nor storage is available (see the diagnostics checklist below the results). Yesterday it happened...
  14. BlueFS spillover detected on 30 OSD(s)

    I agree with this assumption. One should at least be warned before an upgrade. I'm facing the same issue with 50+ OSDs and have no idea how to sort it out. I don't have another cluster to play with and found not much info on how to correctly destroy all OSDs on a single node and wipe all disks (as well... See the OSD removal sketch below the results.
  15. Multipath iSCSI /dev/mapper device is not created (Proxmox 6)

    Check your multipath.conf file. It seems one more "}" bracket is missing at the end (see the bracket-structure skeleton below the results)
  16. [SOLVED] Warning after sucessfull upgrade to PVE 6.x + Ceph Nautilus

    After a successful upgrade from PVE 5 to PVE 6 with Ceph, the warning message "Legacy BlueStore stats reporting detected on ..." appears on the Ceph monitoring panel. Have I missed something during the upgrade, or is this expected behavior? Thanks in advance (see the repair sketch below the results)
  17. lacp bond wihout speed increase

    A single connection will always be limited to the speed of a single interface. An LACP bond increases total throughput (read: the sum of all connections). See the bond example below the results.
  18. Nodes unreachables in PVE Cluster

    My configs: root@pve2:~# cat /etc/network/interfaces # network interface settings; autogenerated # Please do NOT modify this file directly, unless you know what # you're doing. # # If you want to manage parts of the network configuration manually, # please utilize the 'source' or...
  19. Nodes unreachables in PVE Cluster

    I'm facing almost the same issue with a couple of setups after an upgrade to 5.4. Could you show your network config and lspci output? Perhaps we can find something in common
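
Sketches referenced above

For the link-priority question in results 1-2: a minimal corosync 3 (KNET) configuration sketch with two links, where the link with the higher knet_link_priority value is preferred (the behavior discussed in those threads and in the linked commit). The cluster name, addresses and priority values below are illustrative assumptions, not taken from the posts:

    quorum {
      provider: corosync_votequorum
    }

    totem {
      version: 2
      cluster_name: example-cluster        # placeholder name
      config_version: 1
      # Link 0: 10 GbE network, made the preferred link
      interface {
        linknumber: 0
        knet_link_priority: 10             # higher value wins under KNET
      }
      # Link 1: 1 GbE fallback network
      interface {
        linknumber: 1
        knet_link_priority: 5
      }
    }

    nodelist {
      node {
        name: node1                        # placeholder node
        nodeid: 1
        ring0_addr: 10.0.0.1               # address on link 0
        ring1_addr: 192.168.0.1            # address on link 1
      }
    }

The linked corosync commit is what the poster cites as the answer, hence the suggestion that the PVE wiki be updated to match.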
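
For the vm.swappiness observation in result 3: a quick way to check what the running kernel actually uses and to persist the setting (plain sysctl usage, not anything specific to the thread):

    # show the value currently in effect
    sysctl vm.swappiness

    # persist it; a drop-in under /etc/sysctl.d/ works as well
    echo 'vm.swappiness = 0' >> /etc/sysctl.conf
    sysctl -p

    # see whether swap is in use at all
    swapon --show
    free -m

Note that vm.swappiness=0 only biases reclaim away from swap; under real memory pressure the kernel can still swap, which matches the unresolved PVE 5.x behavior the poster recalls.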
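
For the jumbo-frame question in result 5: with MTU 9000 on both rings, a simple way to verify that jumbo frames actually pass end to end is a don't-fragment ping sized to fill the MTU (8972 bytes of ICMP payload plus 28 bytes of IPv4/ICMP headers equals 9000). Interface names and peer addresses are placeholders:

    # confirm the configured MTU on each ring interface
    ip link show dev bond0 | grep mtu
    ip link show dev eno1 | grep mtu

    # -M do forbids fragmentation, so a mismatched path fails immediately
    ping -M do -s 8972 -c 3 10.0.0.2       # peer on the 10 GbE ring
    ping -M do -s 1472 -c 3 192.168.0.2    # sanity check sized for MTU 1500 on the 1 GbE ring

If the 8972-byte ping fails while the small one succeeds, some hop (switch port, bond, bridge) is not passing jumbo frames, which could plausibly disturb corosync traffic on that ring.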
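
For the random 10 GbE hangs in results 6-13, a generic first-pass checklist that can be run over the still-reachable 1 GbE path (the interface name is a placeholder; the real one and its driver can be read from ip a and lspci -k):

    # link state plus error/drop counters on the 10 GbE interface
    ip -s link show dev enp3s0f0
    ethtool enp3s0f0

    # NIC driver and firmware, to match against known issues
    ethtool -i enp3s0f0
    lspci -k | grep -A 3 -i ethernet

    # kernel messages around the time the link disappeared
    dmesg -T | grep -iE 'enp3s0f0|link is (up|down)|tx.*timeout'

    # what corosync and pve-cluster saw during the event
    journalctl -u corosync -u pve-cluster --since "2 hours ago"
    corosync-cfgtool -s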
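
On the "destroy all OSDs on a single node and wipe the disks" question in result 14: a heavily hedged sketch of the usual per-OSD sequence using stock Ceph tooling, repeated for each OSD on the node. The OSD id and device path are placeholders, and this assumes the rest of the cluster can hold the data in the meantime:

    # take the OSD out and make sure the data is safe elsewhere before destroying it
    ceph osd out 12
    ceph osd safe-to-destroy osd.12

    # stop the daemon and remove the OSD from the cluster maps
    systemctl stop ceph-osd@12
    ceph osd purge 12 --yes-i-really-mean-it

    # wipe the backing device so it can be re-created
    ceph-volume lvm zap /dev/sdX --destroy

On PVE, pveceph osd destroy wraps most of the removal step; treat the exact flags as version-dependent and check ceph -s between OSDs.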
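
On the missing "}" in result 15: multipath.conf is a set of nested brace-delimited sections, so every opening brace needs a matching closing one. A minimal illustrative skeleton (all values are placeholders, not a recommended configuration):

    defaults {
        user_friendly_names yes
    }

    blacklist {
        devnode "^sda$"                    # placeholder: exclude the local boot disk
    }

    multipaths {
        multipath {
            wwid  3600a098038303053453f463045727478    # placeholder WWID
            alias mpath-iscsi0
        }
    }                                      # closing brace for 'multipaths' - the one that is easy to drop

Running multipath -t (dump the parsed configuration) is a quick way to see whether the file is read the way you expect.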
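
For the "Legacy BlueStore stats reporting detected" warning in result 16: after the upgrade to Nautilus this is expected for OSDs created on older releases, and the usual remedy is an offline repair of each affected OSD (the OSD id is a placeholder), or muting the warning until all OSDs have been done:

    # repair one OSD at a time, on the node that hosts it
    systemctl stop ceph-osd@7
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-7
    systemctl start ceph-osd@7

    # alternatively, silence the warning cluster-wide until the repairs are finished
    ceph config set global bluestore_warn_on_legacy_statfs false

Wait for the cluster to report HEALTH_OK again before moving on to the next OSD; the warning disappears once every listed OSD has been repaired.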
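
To illustrate the point in result 17: an LACP (802.3ad) bond hashes each connection onto one slave, so a single stream tops out at the speed of one link while many parallel connections can use the aggregate. A sketch in /etc/network/interfaces form as used on PVE nodes; interface names and the address are placeholders:

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4    # hashes per connection; one flow still uses one slave

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.0.10/24           # placeholder address
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

The switch ports have to be configured as an 802.3ad LAG as well; no hash policy can make a single connection exceed the speed of one member link.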
