Search results

  1. Is it possible to add a Version 7.2 node to version 6 cluster before upgrade

    Hi all. We are in the process of upgrading and extending our 4-node cluster. When we set up a new node, is it possible to add this node to the existing version 6.4 cluster? We have Ceph version 15.2 running on the existing cluster. So first installing Ceph version 15 on the version 7 node should...
  2. reboot of all cluster nodes when corosync is restarted on specific member

    Hi Fabian, thanks for your reply. I viewed the bug report and it is likely right. I will test the packages when they "arrive" and come back. Best regards, Lukas
  3. reboot of all cluster nodes when corosync is restarted on specific member

    Hey all, I observed a strange reboot of all my cluster nodes as soon as corosync is restarted on one specific host or that host is rebooted. I have 7 hosts in one cluster. Corosync has 2 links configured: ring0 is on a separate network on a separate switch, ring1 is shared as a VLAN over 10G fiber (a sketch for checking both links follows after this list)...
  4. Proxmox_special_agent (check_mk 2.0) ... json response

    Hey all, this does seem to be a bug in Check_mk after all. All clusters can be queried correctly, but as soon as one host in a cluster is down (even intentionally), the special agent runs into the JSONDecodeError. As soon as all hosts are up again, the output is correct. Regards, Lukas
  5. Proxmox_special_agent (check_mk 2.0) ... json response

    OK... what I found is that I can reach and query both clusters via curl, e.g. /nodes. On the "working cluster", which the cmk special agent can query (a fuller sketch of this API call follows after this list): curl --insecure --cookie "$(<cookie)" https://10.1.0.11:8006/api2/json/nodes/...
  6. Proxmox_special_agent (check_mk 2.0) ... json response

    Unfortunately only a: json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) Passing --debug additionally yields this trace: Traceback (most recent call last): File "/omd/sites/mc/share/check_mk/agents/special/agent_proxmox_ve", line 10, in <module> main() File...
  7. Proxmox_special_agent (check_mk 2.0) ... json response

    Hello Stoiko Ivanov, thanks for the feedback. cmk -d hostname runs both the agent check on the host, which works flawlessly, and the "special agent" described above. The latter queries the Proxmox cluster over HTTPS from the cmk host via the Proxmox API. The call expects...
  8. Proxmox_special_agent (check_mk 2.0) ... json response

    I am trying to monitor Proxmox VE 6.4 clusters with check_mk upgraded to version 2.0. Check_mk 2.0 ships a special agent that uses the Proxmox API. On one cluster (both clusters are on the same patch level) I do get usable answers from the API: On the...
  9. ProxMox 6.2-15 Problem with HotPlug Hard Drives

    To reply to my own post: the problem disappeared after upgrading to libpve-common-perl: 6.2-4. HTH
  10. ProxMox 6.2-15 Problem with HotPlug Hard Drives

    Same problem here with Ceph storage on PVE 6.2.15
  11. watchdog timeout on slow NFS backups

    You are right, it's strange. CPU load is around or below 1 when doing the backups. We have 2 NFS mounts on the cluster. One goes to a local QNAP, which is running fine without any issues. The other is remote, connected via a gateway and IPsec. This one produces the "not responding" messages, which is...
  12. watchdog timeout on slow NFS backups

    Thanks for your answer, "spirit"! corosync and backup so far: this is corosync for "one" node: node { name: pve56 nodeid: 7 quorum_votes: 1 ring0_addr: 192.168.24.56 ring1_addr: 192.168.25.56 } where 192.168.24.0/24 is a separate network with a dedicated switch and 192.168.25.0/24 is...
  13. watchdog timeout on slow NFS backups

    Hi all, since version 6.0 and up to the current version 6.2 we see the following behavior when running backups over WAN to NFS. We have an 8-host cluster (all HP DL380 G7 up to W9) running fine. When doing backups over a WAN connection to a QNAP we first see a lot of this: May 21 22:39:03 pve56 kernel...
  14. Kernel panic on VM's since Kernel pve-kernel-5.0.21-4-pve: 5.0.21-8

    I can confirm version pve-kernel-5.0.21-4-pve: 5.0.21-9 is working correctly on 8 x Intel(R) Xeon(R) CPU E5320. So this is fixed now. Thanks so much for the great work.
  15. Kernel panic on VM's since Kernel pve-kernel-5.0.21-4-pve: 5.0.21-8

    I think this "pve-kernel-5.0.21-4-pve cause Debian guests to reboot loop on older intel CPUs" thread is what you are talking about. The same good old dinosaurs. We use these old hosts for testing and as DMZ hosts, having a lot of HDDs for Ceph, a tape drive... So I will follow the other discussion.
  16. pve-kernel-5.0.21-4-pve cause Debian guests to reboot loop on older intel CPUs

    We have the same here, see "Kernel panic on VM's since Kernel pve-kernel-5.0.21-4-pve: 5.0.21-8". I noticed you also have an old HP G5 mainboard. We have also upgraded newer hardware with no issue. Try to boot the older kernel and see if the VMs are starting again.
  17. Kernel panic on VM's since Kernel pve-kernel-5.0.21-4-pve: 5.0.21-8

    Since we updated to kernel pve-kernel-5.0.21-4-pve: 5.0.21-8 we cannot start any VMs on this host. Migrated VMs get a kernel panic like this: These are the package versions we are running on the host: proxmox-ve: 6.0-2 (running kernel: 5.0.21-3-pve) pve-manager: 6.0-11 (running...
  18. Problem to assign vlan to vmbridge (Proxmox VE 6) / pve-bridge error 512

    You are right, udev is the problem here. So net.ifnames=0 as a GRUB boot parameter leads to the old ethX interface names and everything is working fine (see the naming sketch after this list). But the naming scheme mentioned in the docs, "wiki/Network_Configuration": ..... We currently use the following naming conventions for device...
  19. Problem to assign vlan to vmbridge (Proxmox VE 6) / pve-bridge error 512

    Yes, this is on a host freshly installed from the Proxmox VE ISO. After that we found interfaces named like "rename6" on a 4-port HP network card. Setting .link files in /etc/systemd/network and updating the initramfs solved this. And yes, systemd is not used by Proxmox, so we also could not set...
  20. Problem to assign vlan to vmbridge (Proxmox VE 6) / pve-bridge error 512

    We have the following problem: a network setup with 3 vmbridge interfaces for different VLAN blocks. This is due to the use of IBM blades, where we cannot bond the NICs. The setup is working on the blades. This is the interfaces file on the blades: auto lo iface lo inet loopback...
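
Items 3 and 12 describe a cluster with two corosync links (ring0 on a dedicated switch, ring1 on a shared VLAN). A minimal sketch for verifying that both configured links are actually healthy on a node, assuming a standard Proxmox VE 6.x install with corosync 3:

    # show quorum and cluster membership as Proxmox VE sees it
    pvecm status
    # show the state of every configured corosync link on this node
    corosync-cfgtool -s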
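
The curl call in item 5 queries the Proxmox VE API with a previously obtained authentication cookie. A sketch of the full two-step flow (ticket first, then the /nodes query) follows; the IP address is taken from item 5, while the user name, the password placeholder, and the use of jq are assumptions:

    # 1) request an authentication ticket (the value of the PVEAuthCookie)
    TICKET=$(curl -s -k -d 'username=root@pam' -d 'password=SECRET' \
        https://10.1.0.11:8006/api2/json/access/ticket | jq -r '.data.ticket')
    # 2) query the node list with that ticket as a cookie (the same call as in item 5)
    curl -s -k -b "PVEAuthCookie=${TICKET}" https://10.1.0.11:8006/api2/json/nodes

If a sub-request returns an empty body or an HTML error page instead of JSON, that would explain a json.decoder.JSONDecodeError like the one quoted in item 6.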
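
Items 18 and 19 mention two workarounds for the predictable-network-interface-name problem ("rename6" devices, missing ethX names). A short sketch of both on a standard Debian/Proxmox VE 6 install; the MAC address and the name lan0 are placeholders:

    # a) fall back to the classic ethX names: add net.ifnames=0 to
    #    GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the boot config
    update-grub
    # b) or pin a stable name for one NIC with a systemd .link file,
    #    e.g. /etc/systemd/network/10-lan0.link containing:
    #      [Match]
    #      MACAddress=aa:bb:cc:dd:ee:ff
    #      [Link]
    #      Name=lan0
    update-initramfs -u    # then refresh the initramfs, as noted in item 19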
