[SOLVED] After PVE node address change, "Failed to connect to server" is received on console access

cosmos

Renowned Member
Apr 1, 2013
Hello,

I've been using the free version of PVE on a single node for some time now. Our network addressing changed recently, so I had to change the node's IP address. I did so from the network interface settings. That happened about a week ago.
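For reference, my understanding is that the change made from the GUI ends up in /etc/network/interfaces, roughly like the sketch below (the interface name, addresses and domain are placeholders, not my real values), and that the node name also has to resolve to the node's address through /etc/hosts. I'm not sure whether the GUI change updates /etc/hosts as well, so that's one thing I'm already wondering about.

Code:
# /etc/network/interfaces (sketch, placeholder values)
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# /etc/hosts (sketch, placeholder values)
127.0.0.1       localhost.localdomain localhost
192.168.1.10    pve-1.example.local pve-1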

Today I had a VM that did not respond even though the PVE web interface showed it as running. So I tried to view it from the console but received the error "Failed to connect to server". I then tried to open the consoles of other VMs, but got the same error.

I have two Windows VMs running and a couple of Linux-based ones. In no case could I connect via the console. The other VMs are otherwise fully operational (for example, I can connect just fine to my other Windows 7 box over RDP).

Any ideas on where and what I should look for?

I don't have any clustering set up on the box, and no ZFS, if that matters.
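If it helps, this is the sort of thing I can pull from the host for diagnosis. The commands below are just my guess at what is relevant (as far as I know pveproxy and pvedaemon are the services behind the console connections), so please tell me if other logs matter more:

Code:
# check that the node name still resolves to the current address
hostname --ip-address
getent hosts pve-1
ip -br addr show vmbr0

# look for errors from the services that serve the console sessions
journalctl -u pveproxy -u pvedaemon --since "1 hour ago"
systemctl status pveproxy pvedaemon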

EDIT: I should add that I cannot open a console to my LXC container either, regardless of the method used (noVNC, SPICE, xterm.js).
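Would it be a useful data point to try reaching the container with pct from the node's shell, bypassing the web console? Something like this (the container ID 101 is just an example, not my actual ID):

Code:
# attach to the container's console, or open a shell in it, directly on the node
pct console 101
pct enter 101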

Details:
Code:
root@pve-1:~# pveversion
pve-manager/6.3-6/2184247e (running kernel: 5.4.103-1-pve)

root@pve-1:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.103-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-7
pve-kernel-helper: 6.3-7
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-2-pve: 5.3.13-2
ceph-fuse: 14.2.16-pve1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.10-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-6
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-3
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-8
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve2
 
(bump) Some help here please? There is no console access to any of the VMs via noVNC, but the critical thing is that I cannot access one Windows VM that appears to be running and is pingable, yet is otherwise unresponsive (no CIFS, no RDP)!
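In the meantime, is there anything I can run from the host to see the real state of that Windows VM? I was thinking of something along these lines (the VMID 100 is just an example, not the actual ID):

Code:
# state of the guest as seen by PVE/QEMU
qm status 100 --verbose

# QEMU monitor for the guest; e.g. run "info status" at the prompt
qm monitor 100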