We're having an issue with the SPICE proxy after upgrading our cluster from PVE 5.1 to 5.2.
Prior to the upgrade, we could log into the web UI of any node in the cluster and access a VM running on any other node with remote-viewer (SPICE). After the upgrade, we can only access VMs on the Proxmox host we are logged into. If we try to connect to a VM hosted on another node in the cluster, we get "Unable to connect to the graphics server <filename>".
Output of "pveversion -v":
Code:
proxmox-ve: 5.2-2 (running kernel: 4.15.18-4-pve)
pve-manager: 5.2-8 (running version: 5.2-8/fdf39912)
pve-kernel-4.15: 5.2-7
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-27
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-2
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-30
pve-container: 2.0-26
pve-docs: 5.2-8
pve-firewall: 3.0-14
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-33
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
Everything else in the cluster is working as expected. I'm at a bit of a loss as to what to try next. Any assistance is appreciated.
Thanks!