[SOLVED] Proxmox 6: problem with noVNC - failed to connect to server

hermelin

Member
Sep 28, 2012
27
1
23
Hello,

after upgrading to Proxmox 6, I have a problem with the noVNC console in the web browser (Chrome and Firefox tested). After some time I can no longer connect the noVNC console to a VM; it fails with the error "failed to connect to server". After shutting the VM down and starting it again, everything works - for some time. The same problem occurs on both Linux and Windows VMs, and it hits different VMs at random: at any given moment the noVNC console works on some VMs and not on others.

Thanks for help

The error in the log is:
VM 113 qmp command 'change' failed - got timeout
TASK ERROR: Failed to run vncproxy.
Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-6 (running version: 6.0-6/c71f879f)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-5.0.18-1-pve: 5.0.18-3
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve2
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-5
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
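Side note: the `qmp command 'change' failed - got timeout` error suggests the VM's QMP monitor socket itself has stopped responding. A quick manual probe (just a sketch, assuming socat is installed; VM ID 113 is taken from the error above, adjust it to your VM):

```shell
# QMP requires a capabilities handshake before it accepts commands,
# so send the handshake and a harmless query in one go.
# A healthy QEMU answers with JSON; a wedged one stays silent.
echo '{"execute":"qmp_capabilities"} {"execute":"query-status"}' | \
  socat -t5 - UNIX-CONNECT:/var/run/qemu-server/113.qmp
```

If this also hangs, the QEMU main loop is stuck, which would explain the vncproxy timeout as well.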
 

starnetwork

Active Member
Dec 8, 2009
378
5
38
same for me:
# pveversion
pve-manager/6.0-6/c71f879f (running kernel: 5.0.18-1-pve)

# pveversion --verbose
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-6 (running version: 6.0-6/c71f879f)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 14.2.2-pve1
ceph-fuse: 14.2.2-pve1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve2
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-5
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

Regards
 

VirtualA

New Member
Dec 7, 2018
5
0
1
40
I'm also unable to use the console feature from the Proxmox web interface. I was able to use it even after upgrading to Proxmox 6, but today it doesn't work on any of my VMs. I tried rebooting individual VMs and also the root node. I'm getting the error "connection timed out".

Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-6 (running version: 6.0-6/c71f879f)
pve-kernel-helper: 6.0-6
pve-kernel-5.0: 6.0-5
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve2
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-5
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
 

hermelin

Member
Sep 28, 2012
27
1
23
It looks like the VMs that have the noVNC problem also need more time to show the "Summary" page in the web interface. It usually takes less than 1 second, but when noVNC is not working it takes more than 3 seconds.
 

starnetwork

Active Member
Dec 8, 2009
378
5
38
in my case it was, for some reason, a browser issue; I checked with Firefox on the same computer and it worked for me

Regards,
 

Dominic

Proxmox Staff Member
Staff member
Mar 18, 2019
591
54
28
Thank you for bringing this to our attention! The same problem has already been reported by other users a few days ago. You could help us narrow this problem down by doing this.
 

hermelin

Member
Sep 28, 2012
27
1
23
I have some VMs on LVM and some on LVM-thin; this behaviour occurs on both. The last backup completed correctly on one VM whose noVNC console was not working, but the backup of another VM with a non-working noVNC console failed.
 

VirtualA

New Member
Dec 7, 2018
5
0
1
40
Here is the output for two VMs.

Code:
cat /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvm: nas-lvm
        vgname fileserver
        content rootdir,images
        shared 1

----------------------------------------------------

qm config 100                                                                   
bootdisk: sata0
cores: 2
ide2: none,media=cdrom
memory: 8192
name: vm1
net0: virtio=62:E7:F8:94:D3:9B,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
sata0: local-lvm:vm-100-disk-0,cache=writethrough,size=10G
scsihw: virtio-scsi-pci
smbios1: uuid=f330a6eb-be01-4294-bd35-d4f861be7b4f
sockets: 2
unused0: nas-lvm:vm-100-disk-0
vmgenid: 09ef5296-96ed-4d7c-9a0c-00264d050e4f

----------------------------------------------------
qm showcmd 100 --pretty                                                         
/usr/bin/kvm \
  -id 100 \
  -name vm1 \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/100.pid \
  -daemonize \
  -smbios 'type=1,uuid=f330a6eb-be01-4294-bd35-d4f861be7b4f' \
  -smp '4,sockets=2,cores=2,maxcpus=4' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc unix:/var/run/qemu-server/100.vnc,password \
  -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce \
  -m 8192 \
  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
  -device 'vmgenid,guid=09ef5296-96ed-4d7c-9a0c-00264d050e4f' \
  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:543f2de9734' \
  -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' \
  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
  -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' \
  -drive 'file=/dev/pve/vm-100-disk-0,if=none,id=drive-sata0,cache=writethrough,format=raw,aio=threads,detect-zeroes=on' \
  -device 'ide-hd,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=62:E7:F8:94:D3:9B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' \
  -machine 'type=pc'

----------------------------------------------------

qm config 108                                                                 
balloon: 2048
bootdisk: scsi0
cores: 2
ide2: local:iso/debian-10.0.0-amd64-netinst.iso,media=cdrom
memory: 4096
name: vm2
net0: virtio=FE:4E:C9:11:F4:E8,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-108-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=54533428-5577-44f6-a20c-c814059aee8c
sockets: 2
vmgenid: 4a86e4fd-1f92-495a-9c35-f32e1a3067d3

----------------------------------------------------

qm showcmd 108 --pretty                                                         
/usr/bin/kvm \
  -id 108 \
  -name vm2 \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/108.qmp,server,nowait' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/108.pid \
  -daemonize \
  -smbios 'type=1,uuid=54533428-5577-44f6-a20c-c814059aee8c' \
  -smp '4,sockets=2,cores=2,maxcpus=4' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc unix:/var/run/qemu-server/108.vnc,password \
  -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce \
  -m 4096 \
  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
  -device 'vmgenid,guid=4a86e4fd-1f92-495a-9c35-f32e1a3067d3' \
  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:543f2de9734' \
  -drive 'file=/var/lib/vz/template/iso/debian-10.0.0-amd64-netinst.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' \
  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
  -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
  -drive 'file=/dev/pve/vm-108-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' \
  -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap108i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=FE:4E:C9:11:F4:E8,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' \
  -machine 'type=pc'
 

adhiete

New Member
Aug 28, 2019
5
0
1
Jakarta - Indonesia
I think I have this problem too; I can't use the console or noVNC, either on containers or on the Proxmox server itself.
Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup
        maxfiles 8
        shared 0

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1
I'm using CTs, not VMs; when I run qm config <id> it doesn't show that I have any VM.
 

kalimero

New Member
May 15, 2019
6
0
1
30
Same problem here ("connection timed out"). I can't access the console, and so I can't create a new VM.

Sep 4 23:13:14 srv-proxmox pvedaemon[2885]: starting vnc proxy UPID:srv-proxmox:00000B45:0000D8D7:5D7028EA:vncproxy:150:root@pam:
Sep 4 23:13:14 srv-proxmox pvedaemon[1439]: <root@pam> starting task UPID:srv-proxmox:00000B45:0000D8D7:5D7028EA:vncproxy:150:root@pam:
Sep 4 23:13:24 srv-proxmox pvedaemon[2885]: connection timed out
Sep 4 23:13:24 srv-proxmox pvedaemon[1439]: <root@pam> end task UPID:srv-proxmox:00000B45:0000D8D7:5D7028EA:vncproxy:150:root@pam: connection timed out
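For reference, the hex fields in a PVE UPID encode task metadata, and the fourth field is the task start time as a hex Unix timestamp. A small sketch decoding the one from the log above, just to cross-check the timestamps:

```shell
# Fourth hex field of UPID:srv-proxmox:00000B45:0000D8D7:5D7028EA:vncproxy:...
start_hex=5D7028EA
start_epoch=$(printf '%d' "0x$start_hex")
echo "$start_epoch"          # 1567631594
date -u -d "@$start_epoch"   # Wed Sep  4 21:13:14 UTC 2019 (23:13:14 CEST)
```

That matches the "Sep 4 23:13:14" syslog line, so the task really did start when the proxy was requested and then sat for ten seconds before timing out.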
 

Glowsome

Member
Jul 25, 2017
71
9
13
47
Hello,

Just a question :

- when you upgraded, did you run the prerequisite script 'pve5to6', and if so, did it reveal/show any issues?
 

kalimero

New Member
May 15, 2019
6
0
1
30
Hi

I didn't run the script before the upgrade :(

Here is the result now (v6.0-7). Everything looks fine:

Code:
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
WARN: updates for the following packages are available:
  postfix-sqlite, postfix

Checking proxmox-ve package version..
PASS: already upgraded to Proxmox VE 6

Checking running kernel version..
PASS: expected running kernel '5.0.21-1-pve'.

= CHECKING CLUSTER HEALTH/SETTINGS =

SKIP: standalone node.

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

PASS: storage 'local-lvm' enabled and active.
PASS: storage 'usb-storage' enabled and active.
PASS: storage 'local' enabled and active.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for running guests..
WARN: 4 running guest(s) detected - consider migrating or stopping them.
INFO: Checking if the local node's hostname 'srv-proxmox' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.50.40' configured and active on single interface.
INFO: Check node certificate's RSA key size
PASS: Certificate 'pve-root-ca.pem' passed Debian Busters security level for TLS connections (4096 >= 2048)
PASS: Certificate 'pve-ssl.pem' passed Debian Busters security level for TLS connections (2048 >= 2048)
INFO: Checking KVM nesting support, which breaks live migration for VMs using it..
PASS: KVM nested parameter not set.

= SUMMARY =

TOTAL:    16
PASSED:   12
SKIPPED:  2
WARNINGS: 2
FAILURES: 0

ATTENTION: Please check the output for detailed information!
 

kalimero

New Member
May 15, 2019
6
0
1
30
What's wrong ??
Fresh install (proxmox-ve_6.0-1.iso), I restored the VM, and it's the same problem!

Sep 6 23:54:23 srv-proxmox pvedaemon[4934]: connection timed out
Sep 6 23:54:23 srv-proxmox pvedaemon[1142]: <root@pam> end task UPID:srv-proxmox:00001346:0002CBE3:5D72D585:vncproxy:101:root@pam: connection timed out
Sep 6 23:55:00 srv-proxmox systemd[1]: Starting Proxmox VE replication runner...
 

kalimero

New Member
May 15, 2019
6
0
1
30
Pff, it works with Brave and Firefox. VNC doesn't work with Vivaldi, Chrome, or Edge.
 

hermelin

Member
Sep 28, 2012
27
1
23
After yesterday's backup run, many VMs show errors like this:
100: 2019-09-06 20:00:02 INFO: Starting Backup of VM 100 (qemu)
100: 2019-09-06 20:00:02 INFO: status = running
100: 2019-09-06 20:00:03 INFO: update VM 100: -lock backup
100: 2019-09-06 20:00:03 INFO: VM Name: server1
100: 2019-09-06 20:00:03 INFO: include disk 'virtio0' 'vm:vm-100-disk-1' 500G
100: 2019-09-06 20:00:04 INFO: backup mode: snapshot
100: 2019-09-06 20:00:04 INFO: ionice priority: 7
100: 2019-09-06 20:00:04 INFO: creating archive '/mnt/autofs/dump/vzdump-qemu-100-2019_09_06-20_00_02.vma.lzo'
100: 2019-09-06 20:00:11 ERROR: got timeout
100: 2019-09-06 20:00:11 INFO: aborting backup job
100: 2019-09-06 20:10:11 ERROR: VM 100 qmp command 'backup-cancel' failed - got timeout
100: 2019-09-06 20:10:12 ERROR: Backup of VM 100 failed - got timeout
The VMs are running, but the noVNC console and backups don't work. It looks like it may be a problem with the qemu-server sockets.
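If it is a stuck socket, one way to test that theory is to connect to the per-VM VNC unix socket directly (a sketch, assuming socat is installed; VM ID 100 is only an example - the socket path comes from the `-vnc unix:...,password` argument visible in `qm showcmd`):

```shell
# A responsive QEMU sends its "RFB 003.008" protocol greeting immediately
# after the connection is made; a hung one stays silent until the timeout.
timeout 5 socat -t3 - UNIX-CONNECT:/var/run/qemu-server/100.vnc </dev/null
```

No greeting within the timeout would point at the QEMU process itself being wedged, consistent with both the vncproxy and the backup-cancel timeouts.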
 

kalimero

New Member
May 15, 2019
6
0
1
30
After yesterday's backup run, many VMs show errors like this:


The VMs are running, but the noVNC console and backups don't work. It looks like it may be a problem with the qemu-server sockets.
Did you try with Firefox? For me the console works with the Firefox and Brave browsers.
 

Glowsome

Member
Jul 25, 2017
71
9
13
47
Are you willing to try a Debian install and move to PVE 6 afterwards? I am using this type of install for all my PVE machines (4, in a cluster setup) and I have not faced the issues you are experiencing; I also feel more 'in control' when installing and configuring PVE this way.

Background :
As I require/wish a different partitioning scheme than the PVE 6 install ISO can offer, I use the Debian-to-PVE install method (described in https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster, and previously used on Jessie/PVE 5; also available on the wiki).
 

hermelin

Member
Sep 28, 2012
27
1
23
I upgraded to 6 using https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0. All packages were successfully upgraded to Buster, and the Proxmox 6 packages were upgraded without problems too. The noVNC console works if I freshly start a VM, but after the VM has been running for some time the problem appears. So it doesn't look like a configuration problem, but rather a bug in the system.
 

ultrabizweb

New Member
Oct 27, 2009
2
0
1
I can verify the issue as well. I am, however, able to use noVNC with the Firefox browser, but not with Chrome or Edge. I just upgraded from Proxmox 5 to 6 today using the documentation. My system was installed via Debian, not the community ISO.
 
