Using SPICE in a cluster: "Unable to connect to the graphic server"

ngbatnicdotmil

New Member
Oct 13, 2022
Hi there!

I have two PVE nodes (P1 and P2).
When I'm logged in to the P1 node and try to access a VM on the P2 node with virt-viewer, I get the error message "Unable to connect to the graphic server C:\Users\user\Downloads\9_NoK9Fd.vv".
The nodes are on different networks, but they have a 10G Ethernet connection for cluster traffic.

I'm not sure whether SPICE proxying needs extra configuration, or whether this is expected behaviour.
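For reference, the downloaded .vv file is a plain INI-style file, roughly along these lines (trimmed, and with placeholder values rather than my real ones):

[virt-viewer]
type=spice
proxy=http://p2.example.local:3128
host=pvespiceproxy:...
tls-port=61000
password=<one-time ticket>
delete-this-file=1

As far as I understand, the proxy line has to point at a host the client machine can resolve and reach on TCP 3128, which is why I suspect the cross-node case is what fails for me.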
 
Hello,

Does virt-viewer work on the P1 node itself?

Could you post the output of pveversion -v? Also, if you have antivirus software, try disabling it and testing SPICE again, to narrow down the cause.
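It can also help to confirm that the SPICE proxy daemon is running on the node hosting the VM (spiceproxy is the standard PVE service name):

systemctl status spiceproxy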
 
Hi Moayad,

virt-viewer works fine on both nodes as long as you are logged in to the node where the VM is running; the error only appears when you try to access a VM running on the other node (the one you are not logged in to).

Here is the output of pveversion -v from the first node:

proxmox-ve: 7.2-1 (running kernel: 5.15.39-3-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-8
pve-kernel-helper: 7.2-8
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-3-pve: 5.11.22-7
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph: 16.2.9-pve1
ceph-fuse: 16.2.9-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-7
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-11
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1

And from the second node:

proxmox-ve: 7.2-1 (running kernel: 5.15.39-3-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-8
pve-kernel-helper: 7.2-8
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-7
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-11
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1

Output of pvecm status:

Cluster information
-------------------
Name: PVE-Yer-Cluster
Config Version: 2
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Fri Oct 14 13:59:26 2022
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000002
Ring ID: 1.18a
Quorate: Yes

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.0.0.251
0x00000002 1 10.0.0.252 (local)
 
Have you tried pvecm updatecerts --force?
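For reference, the usual sequence (run on every node, then restart the proxy daemons so they pick up the regenerated certificates) looks like this:

pvecm updatecerts --force
systemctl restart pveproxy spiceproxy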
Yep, the same error pops up again.

Here is /etc/network/interfaces from the first node:

auto lo
iface lo inet loopback

auto enp70s0
iface enp70s0 inet manual
#top 2.5G

auto enp68s0
iface enp68s0 inet manual
#bottom 10G

auto vmbr0
iface vmbr0 inet static
    address 192.168.6.251/24
    gateway 192.168.6.1
    bridge-ports enp70s0
    bridge-stp off
    bridge-fd 0
#mainnet

auto vmbr1
iface vmbr1 inet static
    address 10.0.0.251/24
    bridge-ports enp68s0
    bridge-stp off
    bridge-fd 0
#Cluster Bridge

And from the second node:

auto lo
iface lo inet loopback

auto enp70s0
iface enp70s0 inet manual
#top 2.5G

auto enp68s0
iface enp68s0 inet manual
#bottom 10G

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.251/24
    gateway 192.168.1.1
    bridge-ports enp70s0
    bridge-stp off
    bridge-fd 0
#mainnet

auto vmbr1
iface vmbr1 inet static
    address 10.0.0.252/24
    bridge-ports enp68s0
    bridge-stp off
    bridge-fd 0
#Cluster Bridge
 
I am having the same problem: two nodes in a cluster, and I get the same message.
I edited the hosts files on both nodes and added an entry for the other node. Same problem.

I suspect it may be because I use traefik as a reverse proxy in front of Proxmox, and I am looking into that.
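For anyone comparing notes, the hosts entries I mean look like this (hostnames and addresses here are placeholders, not my real ones):

# /etc/hosts on each node
192.168.1.10    pve1.example.local pve1
192.168.1.11    pve2.example.local pve2

One thing I noticed while digging into the reverse-proxy angle: as far as I can tell, the SPICE connection in the .vv file goes to the spiceproxy service on TCP port 3128, not the web UI port 8006, so a reverse proxy that only forwards the web UI traffic would not carry it.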
 