Search results

  1. Cluster reset when one node can only be reached over corosync ring0 -- configuration problem?

    Hey All, I got a complete cluster reset (watchdog-based reset of all nodes) in the following scenario. We have a cluster of 7 hosts. corosync has 2 rings: the ring0 network 192.168.xx.n/24 uses a dedicated copper switch, the ring1 network 192.168.yy.n/24 uses a VLAN on a 10G fiber link. Here is part of... (see the corosync.conf sketch after this list)
  2. Is it possible to add a version 7.2 node to a version 6 cluster before the upgrade

    Hi all. We are in the process of upgrading and extending our 4-node cluster. When we set up a new node, is it possible to add this node to the existing version 6.4 cluster? We have Ceph version 15.2 running on the existing cluster. So first installing Ceph version 15 on the version 7 node should...
  3. Reboot of all cluster nodes when corosync is restarted on a specific member

    Hey all, I observed a strange reboot of all my cluster nodes as soon as corosync is restarted on one specific host or that host is rebooted. I have 7 hosts in one cluster. Corosync has 2 links configured: ring0 is on a separate network on a separate switch, ring1 is shared as a VLAN over 10G fiber...
  4. Proxmox_special_agent (check_mk 2.0) ... json response

    I am trying to monitor a Proxmox VE 6.4 cluster with check_mk upgraded to version 2.0. check_mk 2.0 ships a special agent that uses the Proxmox API. On one cluster (both clusters are at the same patch level) I do get usable responses from the API: On the... (see the Proxmox API example after this list)
  5. watchdog timeout on slow NFS backups

    Hi all, from version 6.0 up to the current version 6.2 we have seen the following behavior when running backups over WAN to NFS. We have an 8-host cluster (all HP DL380, G7 up to W9) running fine. When doing backups over a WAN connection to a QNAP, we first see a lot of this: May 21 22:39:03 pve56 kernel...
  6. Kernel panic on VMs since kernel pve-kernel-5.0.21-4-pve: 5.0.21-8

    Since we updated to kernel pve-kernel-5.0.21-4-pve: 5.0.21-8 we cannot start any VMs on this host. Migrated VMs hit a kernel panic like this: These are the package versions we are running on the host: proxmox-ve: 6.0-2 (running kernel: 5.0.21-3-pve) pve-manager: 6.0-11 (running...
  7. Problem assigning a VLAN to vmbridge (Proxmox VE 6) / pve-bridge error 512

    We have the following problem: a network setup with 3 vmbridge interfaces for different VLAN blocks. This is due to the use of IBM blades, where we cannot bond the NICs. The setup works on the blades. This is the interfaces file on the blades: auto lo iface lo inet loopback... (see the interfaces sketch after this list)
  8. VE 4.0 Kernel Panic on HP ProLiant servers

    We have 2 labs set up with Proxmox VE 4.0 from the latest ISO download. In one lab we have HP ProLiant servers with massive kernel panics in the module hpwdt.ko. Unfortunately we do not have the trace due to HP's damned iLO :-( but I will give more info once I have captured it. We have a Ceph cluster with 3...
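
A note on results 1 and 3, which describe the same two-link setup: in corosync 3 (as shipped with Proxmox VE 6) such a configuration lives in /etc/pve/corosync.conf roughly as sketched below. This is only an illustration; the cluster name, node names, and addresses are placeholders, not the posters' actual values.

    totem {
      cluster_name: examplecluster
      config_version: 2
      ip_version: ipv4
      version: 2
      interface {
        linknumber: 0    # ring0: dedicated copper network
      }
      interface {
        linknumber: 1    # ring1: VLAN on the 10G fiber
      }
    }

    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 192.168.10.1
        ring1_addr: 192.168.20.1
      }
      # ...one node block per host, seven in the setups described above
    }

    quorum {
      provider: corosync_votequorum
    }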
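
For result 4, the check_mk 2.0 special agent queries the standard Proxmox VE HTTP API. To check whether the API itself returns usable JSON, independent of check_mk, a manual call along these lines can help; the host name and credentials are placeholders.

    # Request an authentication ticket (placeholder credentials)
    curl -sk -d 'username=monitor@pve' -d 'password=secret' \
        https://pve-host:8006/api2/json/access/ticket

    # Reuse the returned ticket to fetch cluster-wide status as JSON
    curl -sk -b 'PVEAuthCookie=<ticket from previous call>' \
        https://pve-host:8006/api2/json/cluster/resources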
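
For result 7, a bridge-per-VLAN layout without bonding is usually written in /etc/network/interfaces along the lines of the sketch below; the interface names, VLAN ID, and addresses are placeholders.

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet manual

    # VLAN sub-interface (placeholder VLAN ID 100)
    auto eth0.100
    iface eth0.100 inet manual

    # Management bridge on the untagged NIC
    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0

    # Guest bridge carrying the tagged VLAN (one such bridge per VLAN block)
    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports eth0.100
        bridge-stp off
        bridge-fd 0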