Hello community!
First of all, I have to thank fmgoodman again for writing such a detailed explanation of his error in his post. It turns out my cluster has a very similar problem.

I have five hosts in a cluster (host01..05) and can access the GUI of each of them. But when I try to open a console (all standard settings, nothing fancy configured) of a VM running on another host, it doesn't work, and I can see an error message in journalctl -f (where 115 is the ID of the VM running on the other host):

Code:
Apr 15 08:52:12 proxmox141 pvedaemon[280615]: <root@pam> starting task UPID:proxmox141:00046F1C:237C01C9:69DF359C:vncproxy:115:root@pam:
Apr 15 08:52:12 proxmox141 pvedaemon[290588]: starting vnc proxy UPID:proxmox141:00046F1C:237C01C9:69DF359C:vncproxy:115:root@pam:
Apr 15 08:52:13 proxmox141 pveproxy[290382]: Clearing outdated entries from certificate cache
Apr 15 08:52:13 proxmox141 pveproxy[290382]: Use of uninitialized value $statuscode in concatenation (.) or string at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 678.
Apr 15 08:52:15 proxmox141 pvedaemon[280615]: <root@pam> end task UPID:proxmox141:00046F1C:237C01C9:69DF359C:vncproxy:115:root@pam: OK

Line 678 of the file "AnyEvent.pm" reads:
Code:
$self->dprint("websocket received close. status code: '$statuscode'");
However, when I try the same with an LXC container running on another node, it works fine. journalctl -f tells me:

Code:
Apr 15 08:54:32 proxmox141 pvedaemon[291162]: starting lxc termproxy UPID:proxmox141:0004715A:237C384C:69DF3628:vncproxy:200:root@pam:
Apr 15 08:54:32 proxmox141 pvedaemon[280613]: <root@pam> starting task UPID:proxmox141:0004715A:237C384C:69DF3628:vncproxy:200:root@pam:
Apr 15 08:54:32 proxmox141 pvedaemon[280615]: <root@pam> successful auth for user 'root@pam'

What I have done after reading the post from fmgoodman:
- made sure that SSH connections as user root between all nodes are working
- refreshed the certificates by executing pvecm updatecerts -f on all nodes
- restarted the core processes pveproxy and pvedaemon on all nodes
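To double-check the first two items in one pass, a small loop like this can confirm passwordless root SSH and print each node's certificate fingerprint (the hostnames host01..host05 are placeholders for your actual node names; the certificate path is the standard PVE location):

```shell
# Check passwordless root SSH and the pve-ssl.pem fingerprint on each node.
# Hostnames are placeholders -- adjust to your cluster.
for node in host01 host02 host03 host04 host05; do
    echo "== $node =="
    ssh -o BatchMode=yes -o ConnectTimeout=3 "root@$node" hostname \
        || echo "SSH to $node failed"
    ssh -o BatchMode=yes -o ConnectTimeout=3 "root@$node" \
        "openssl x509 -in /etc/pve/nodes/$node/pve-ssl.pem -noout -fingerprint" \
        || echo "could not read certificate on $node"
done
```

A node whose fingerprint differs from what the other nodes expect would point at stale certificates despite the updatecerts run.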
pveversion -v:

Code:
proxmox-ve: 9.1.0 (running kernel: 6.17.4-2-pve)
pve-manager: 9.1.7 (running version: 9.1.7/16b139a017452f16)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17: 6.17.13-2
proxmox-kernel-6.17.13-2-pve-signed: 6.17.13-2
proxmox-kernel-6.17.13-1-pve-signed: 6.17.13-1
proxmox-kernel-6.17.4-2-pve-signed: 6.17.4-2
proxmox-kernel-6.8: 6.8.12-16
proxmox-kernel-6.8.12-16-pve-signed: 6.8.12-16
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.10-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx12
intel-microcode: 3.20251111.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.1
libproxmox-backup-qemu0: 2.0.2
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.6
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.1.1
libpve-cluster-perl: 9.1.1
libpve-common-perl: 9.1.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.5
libpve-rs-perl: 0.11.4
libpve-storage-perl: 9.1.1
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-4
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.5-1
proxmox-backup-file-restore: 4.1.5-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.9
pve-cluster: 9.1.1
pve-container: 6.1.2
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.18-2
pve-ha-manager: 5.1.3
pve-i18n: 3.7.0
pve-qemu-kvm: 10.1.2-7
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.6
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.4.1-pve1

pvecm status:
Code:
Cluster information
-------------------
Name:             ProxmoxCluster
Config Version:   5
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Apr 15 08:58:19 2026
Quorum provider:  corosync_votequorum
Nodes:            5
Node ID:          0x00000002
Ring ID:          1.483
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      5
Quorum:           3
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.253.140
0x00000002          1 192.168.253.141 (local)
0x00000003          1 192.168.253.142
0x00000004          1 192.168.253.143
0x00000005          1 192.168.253.144

corosync-cfgtool -n:
Code:
Local node ID 2, transport knet
nodeid: 1 reachable
   LINK: 0 udp (192.168.253.141->192.168.253.140) enabled connected mtu: 8885
nodeid: 3 reachable
   LINK: 0 udp (192.168.253.141->192.168.253.142) enabled connected mtu: 8885
nodeid: 4 reachable
   LINK: 0 udp (192.168.253.141->192.168.253.143) enabled connected mtu: 8885
nodeid: 5 reachable
   LINK: 0 udp (192.168.253.141->192.168.253.144) enabled connected mtu: 8885

As I'm running out of time right now (work is calling), I haven't run any further commands. Is there anything else I can try at this point, or did I miss any critical information? I had started reading about the spiceproxy process, which is why I added the corresponding tags to my post.
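One more check that might be worth doing (a suggestion based on how the console normally works, so treat the details as assumptions): opening a console for a guest on another node makes pveproxy on the node you are logged in to open a websocket to the target node on port 8006, so a blocked or filtered 8006 between nodes would break remote VM consoles while leaving each node's own GUI working. A quick reachability test from the local node, using the IPs from the corosync output (if your management network differs from the corosync network, substitute those IPs):

```shell
# Probe port 8006 (PVE API/proxy) on the other nodes.
# IPs taken from the corosync output above; adjust if your management
# network uses different addresses.
for ip in 192.168.253.140 192.168.253.142 192.168.253.143 192.168.253.144; do
    if nc -z -w 2 "$ip" 8006; then
        echo "$ip:8006 reachable"
    else
        echo "$ip:8006 NOT reachable"
    fi
done
```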
In any case, thanks in advance for your consideration!
Best regards.