Recent content by Sascha72036

  1. 48 node pve cluster

    We have one dual 10G NIC in each node. The 10G NIC is shared between corosync & ceph traffic (separated into two different vlans). The floods start for no apparent reason. With sctp instead of knet we have no more floods, but after we restart corosync, some NICs in our cluster are resetting due to tx... (see the corosync transport sketch after this list)
  2. Alternative to corosync for large clusters

    We have exactly the same problem with our 48 node cluster. Some nodes start udp floods. As a result, the 10G NICs of other nodes enter a blocking state. We tried sctp, but that's not the solution. Have you found a way to run corosync stably in large clusters?
  3. 48 node pve cluster

    Hello everyone, we're running a 48 node pve cluster with this setup: AMD EPYC 7402P, 512GB memory, Intel X520-DA2 or Mellanox Connect X3 NIC, Ceph pool with only NVMe, 2x 10Gbit/s interfaces (for cluster traffic) + 2x 1G (for public traffic). As a few others have recently reported in the forum...
  4. high steal time on amd epyc-Rome

    Hello everyone, I use the AMD EPYC 7402P to virtualize with Proxmox. The problem is that from around 45% host CPU usage, the steal time on the VMs increases to more than 5%. Every VM is started with the CPU flag "host" (see the steal time sketch after this list). root@VM24:~# cat /proc/cpuinfo processor : 0 vendor_id ...
  5. corosync udp flood

    We switched the transport to sctp. After that, the corosync flood problems in the cluster no longer occurred. A few days ago we upgraded all servers to the latest PVE version. During the upgrade process, however, the network cards suddenly switched off on random hosts again with the same...
  6. corosync udp flood

    Hello, we are running a 42 node proxmox cluster with ceph. Our nodes are connected via Intel X520-DA2 (2x 10G) to two separate Arista 7050QX switches. Corosync and Ceph are separated into two different vlans (see the VLAN separation sketch after this list). The normal traffic of the VMs runs over the onboard NIC. We have big problems with...
  7. Problems with the proxmox network

    Here is another excerpt from the syslog: Mar 20 10:15:37 prox39 kernel: [1843705.835893] vmbr0: port 5(tap148i0) entered disabled state Mar 20 10:15:37 prox39 pmxcfs[1711]: [status] notice: received log Mar 20 10:15:37 prox39 pmxcfs[1711]: [status] notice: received log Mar 20 10:15:37 prox39...
  8. Problems with the proxmox network

    Hello, we are running a large cluster with 41 nodes (processor: AMD EPYC 7402P) and a separate 2x 10G NIC for ceph + cluster traffic (corosync etc.). Cluster, Ceph and so on are all separated from the normal traffic of the VMs. We use two redundant switches (Arista) for ceph and two redundant...
  9. EXT4-fs Error with Raid 10

    Hello, I have a similar issue. Is there a recommendation from Proxmox on how to rescue the VM data? Best regards, S. Gericke
  10. LXC ARP issue

    Hello, I run several LXC containers in my cluster with more than one IPv4 address. Every LXC container has one IPv4 address in the Proxmox "network" configuration. We add each further address in the interfaces file via: up ip addr add 45.132.89.134/24 dev eth0 (see the additional LXC address sketch after this list). My problem is: It seems as if the LXC...
  11. Deactivate HA

    Thank you for the answer. All servers are set to "idle". The problem is that one node restarts when another node in the cluster reboots and comes back up. I blamed HA for this; could there be other causes that trigger it? I have attached a syslog...
  12. Node reboot causes other node reboot when in cluster

    In my cluster there are 30 nodes. Maybe my issue is caused by other problems. Thank you for your answer.
  13. Proxmox cluster problems

    Hello, I have a problem with my Proxmox cluster. When a node restarts, another node in the cluster restarts as soon as the restarted node has come back up. After a short time, all nodes in the cluster are then also shown as "Offline" or with a grey question mark...
  14. Node reboot causes other node reboot when in cluster

    I have exactly the same issue. Did you find a solution? I want to disable HA in my cluster. Disabling HA for the VM and deleting it from "Resources" is not enough.
  15. Deactivate HA

    Hello, is it possible to deactivate HA in the entire cluster? Best regards and thank you for your help. -- German: Hello, since I am having difficulties with HA in the cluster, I would like to deactivate it in the entire Proxmox cluster. No VMs are managed via HA anymore. Is it possible to disable HA... (see the disabling HA sketch after this list)
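
Sketch for item 1 (corosync transport): a minimal sketch of how the knet-to-SCTP switch described there might look in /etc/pve/corosync.conf, assuming corosync 3.x with kronosnet; cluster_name, config_version and the link number are placeholders, not values taken from the posts.

    totem {
      cluster_name: pve-cluster      # placeholder name
      config_version: 43             # must be increased on every edit
      transport: knet
      # assumption: carry the knet traffic over SCTP instead of the default UDP
      knet_transport: sctp
      interface {
        linknumber: 0
      }
    }

Whether SCTP actually avoids the floods is exactly what the posts above discuss; items 1 and 5 report mixed results after the switch.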
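
Sketch for item 4 (steal time): one way to check the "host" CPU flag and the reported steal time; qm and /proc/stat are standard Proxmox/Linux tools, the VMID 148 is a placeholder.

    # on the host: show the CPU type of a VM (placeholder VMID)
    qm config 148 | grep ^cpu
    # set it explicitly to "host" if it is not already
    qm set 148 --cpu host

    # inside the VM: the 8th value of the "cpu" line in /proc/stat is the
    # accumulated steal time in USER_HZ ticks
    grep '^cpu ' /proc/stat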
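
Sketch for item 6 (VLAN separation): a possible /etc/network/interfaces layout for putting corosync and ceph into two VLANs on one 10G port, assuming ifupdown2 or the vlan package; interface names, VLAN IDs and addresses are illustrative assumptions, not the poster's values.

    # assumed VLAN 100: corosync
    auto enp65s0f0.100
    iface enp65s0f0.100 inet static
        address 10.10.100.11/24

    # assumed VLAN 101: ceph
    auto enp65s0f0.101
    iface enp65s0f0.101 inet static
        address 10.10.101.11/24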
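
Sketch for item 10 (additional LXC address): how the quoted up-line could sit in the container's /etc/network/interfaces; the primary address and gateway are assumptions, the added address is the one quoted in the post.

    auto eth0
    iface eth0 inet static
        address 45.132.89.130/24   # assumed primary address from the Proxmox network config
        gateway 45.132.89.1        # assumed gateway
        # additional IPv4 address quoted in the post
        up ip addr add 45.132.89.134/24 dev eth0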
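
Sketch for item 15 (disabling HA): commands commonly involved when taking HA out of a PVE cluster; ha-manager and the pve-ha-lrm/pve-ha-crm services exist on every node, but whether stopping those services cluster-wide is the supported way to disable HA is exactly the open question in items 11, 14 and 15.

    # list remaining HA resources and remove them (vm:100 is a placeholder SID)
    ha-manager status
    ha-manager remove vm:100

    # on each node: stop the HA services (an assumption about the approach)
    systemctl stop pve-ha-lrm pve-ha-crm
    systemctl disable pve-ha-lrm pve-ha-crm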
