Proxmox "Failed to run vncproxy" error

Martinjack (New Member), Apr 2, 2020
Hello,
Proxmox has been installed for three months. After I installed CentOS, I found I was unable to access the console of any VM: it fails with "Failed to connect to service" and "TASK ERROR: Failed to run vncproxy". The Proxmox node shell, however, still works.
(Screenshots attached: Snipaste_2020-04-02_13-54-54.png, Snipaste_2020-04-02_13-55-58.png)
Can anyone help? Thank you
 
Hi,
Did you install CentOS through noVNC, and did noVNC then stop working on all VMs after the installation?

Have you tried a different browser to access noVNC?

Please post all of the following:
Proxmox version: pveversion -v

Config of the CentOS VM: qm config <VMID>
 
proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
I've tried different browsers to access noVNC, such as Firefox and Chrome.
I also switched to a MacBook Air and opened it there; I get the same problem.
 
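Here is the journal output from around the time I opened the console. On the node it can be captured with something like the following (the exact commands are just an example of how to pull this window of the log, not part of the error itself):

journalctl -f    # follow the journal live while reproducing the error
journalctl --since "2020-04-02 17:40" --until "2020-04-02 18:14"    # or inspect a specific time window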
Apr 02 17:40:22 pve kernel: ata1.00: exception Emask 0x0 SAct 0x100 SErr 0x0 action 0x0
Apr 02 17:40:22 pve kernel: ata1.00: irq_stat 0x40000008
Apr 02 17:40:22 pve kernel: ata1.00: failed command: READ FPDMA QUEUED
Apr 02 17:40:22 pve kernel: ata1.00: cmd 60/08:40:20:9c:bf/00:00:00:00:00/40 tag 8 ncq dma 4096 in
res 41/40:40:20:9c:bf/00:00:00:00:00/a0 Emask 0x409 (media error) <F>
Apr 02 17:40:22 pve kernel: ata1.00: status: { DRDY ERR }
Apr 02 17:40:22 pve kernel: ata1.00: error: { UNC }
Apr 02 17:40:22 pve kernel: ata1.00: configured for UDMA/133
Apr 02 17:40:22 pve kernel: sd 0:0:0:0: [sda] tag#8 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Apr 02 17:40:22 pve kernel: sd 0:0:0:0: [sda] tag#8 Sense Key : Medium Error [current]
Apr 02 17:40:22 pve kernel: sd 0:0:0:0: [sda] tag#8 Add. Sense: Unrecovered read error - auto reallocate failed
Apr 02 17:40:22 pve kernel: sd 0:0:0:0: [sda] tag#8 CDB: Read(10) 28 00 00 bf 9c 20 00 00 08 00
Apr 02 17:40:22 pve kernel: blk_update_request: I/O error, dev sda, sector 12557344 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Apr 02 17:40:22 pve kernel: ata1: EH complete
Apr 02 17:40:22 pve kernel: ata1.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
Apr 02 17:40:22 pve kernel: ata1.00: irq_stat 0x40000008
Apr 02 17:40:22 pve kernel: ata1.00: failed command: READ FPDMA QUEUED
Apr 02 17:40:22 pve kernel: ata1.00: cmd 60/08:00:20:9c:bf/00:00:00:00:00/40 tag 0 ncq dma 4096 in
res 41/40:00:20:9c:bf/00:00:00:00:00/a0 Emask 0x409 (media error) <F>
Apr 02 17:40:22 pve kernel: ata1.00: status: { DRDY ERR }
Apr 02 17:40:22 pve kernel: ata1.00: error: { UNC }
Apr 02 17:40:22 pve kernel: ata1.00: configured for UDMA/133
Apr 02 17:40:22 pve kernel: sd 0:0:0:0: [sda] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Apr 02 17:40:22 pve kernel: sd 0:0:0:0: [sda] tag#0 Sense Key : Medium Error [current]
Apr 02 17:40:22 pve kernel: sd 0:0:0:0: [sda] tag#0 Add. Sense: Unrecovered read error - auto reallocate failed
Apr 02 17:40:22 pve kernel: sd 0:0:0:0: [sda] tag#0 CDB: Read(10) 28 00 00 bf 9c 20 00 00 08 00
Apr 02 17:40:22 pve kernel: blk_update_request: I/O error, dev sda, sector 12557344 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Apr 02 17:40:22 pve kernel: ata1: EH complete
Apr 02 17:48:59 pve systemd[1]: session-40.scope: Succeeded.
Apr 02 17:48:59 pve systemd-logind[719]: Session 40 logged out. Waiting for processes to exit.
Apr 02 17:48:59 pve systemd-logind[719]: Removed session 40.
Apr 02 17:48:59 pve pvedaemon[2144]: <root@pam> end task UPID:pve:00007A4A:004C4CC5:5E85B21B:vncshell::root@pam: OK
Apr 02 17:49:00 pve systemd[1]: Starting Proxmox VE replication runner...
Apr 02 17:49:02 pve systemd[1]: pvesr.service: Succeeded.
Apr 02 17:49:02 pve systemd[1]: Started Proxmox VE replication runner.
Apr 02 17:49:09 pve systemd[1]: Stopping User Manager for UID 0...
Apr 02 17:49:09 pve systemd[30941]: Stopped target Default.
Apr 02 17:49:09 pve systemd[30941]: Stopped target Basic System.
Apr 02 17:49:09 pve systemd[30941]: Stopped target Sockets.
Apr 02 17:49:09 pve systemd[30941]: gpg-agent.socket: Succeeded.
Apr 02 17:49:09 pve systemd[30941]: Closed GnuPG cryptographic agent and passphrase cache.
Apr 02 17:49:09 pve systemd[30941]: gpg-agent-extra.socket: Succeeded.
Apr 02 17:49:09 pve systemd[30941]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
Apr 02 17:49:09 pve systemd[30941]: dirmngr.socket: Succeeded.
Apr 02 17:49:09 pve systemd[30941]: Closed GnuPG network certificate management daemon.
Apr 02 17:49:09 pve systemd[30941]: gpg-agent-ssh.socket: Succeeded.
Apr 02 17:49:09 pve systemd[30941]: Closed GnuPG cryptographic agent (ssh-agent emulation).
Apr 02 17:49:09 pve systemd[30941]: Stopped target Paths.
Apr 02 17:49:09 pve systemd[30941]: Stopped target Timers.
Apr 02 17:49:09 pve systemd[30941]: gpg-agent-browser.socket: Succeeded.
Apr 02 17:49:09 pve systemd[30941]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers).
Apr 02 18:13:02 pve systemd[1]: pvesr.service: Succeeded.
Apr 02 18:13:02 pve systemd[1]: Started Proxmox VE replication runner.
Apr 02 18:13:05 pve pvedaemon[2146]: <root@pam> starting task UPID:pve:00001239:004FA766:5E85BAB1:vncproxy:100:root@pam:
Apr 02 18:13:05 pve pvedaemon[4665]: starting vnc proxy UPID:pve:00001239:004FA766:5E85BAB1:vncproxy:100:root@pam:
Apr 02 18:13:07 pve kernel: ata1.00: exception Emask 0x0 SAct 0x10 SErr 0x0 action 0x0
Apr 02 18:13:07 pve kernel: ata1.00: irq_stat 0x40000008
Apr 02 18:13:07 pve kernel: ata1.00: failed command: READ FPDMA QUEUED
Apr 02 18:13:07 pve kernel: ata1.00: cmd 60/08:20:20:9c:bf/00:00:00:00:00/40 tag 4 ncq dma 4096 in
res 41/40:20:20:9c:bf/00:00:00:00:00/a0 Emask 0x409 (media error) <F>
Apr 02 18:13:07 pve kernel: ata1.00: status: { DRDY ERR }
Apr 02 18:13:07 pve kernel: ata1.00: error: { UNC }
Apr 02 18:13:07 pve kernel: ata1.00: configured for UDMA/133
Apr 02 18:13:07 pve kernel: sd 0:0:0:0: [sda] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Apr 02 18:13:07 pve kernel: sd 0:0:0:0: [sda] tag#4 Sense Key : Medium Error [current]
Apr 02 18:13:07 pve kernel: sd 0:0:0:0: [sda] tag#4 Add. Sense: Unrecovered read error - auto reallocate failed
Apr 02 18:13:07 pve kernel: sd 0:0:0:0: [sda] tag#4 CDB: Read(10) 28 00 00 bf 9c 20 00 00 08 00
Apr 02 18:13:07 pve kernel: blk_update_request: I/O error, dev sda, sector 12557344 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Apr 02 18:13:07 pve kernel: ata1: EH complete
Apr 02 18:13:07 pve kernel: ata1.00: exception Emask 0x0 SAct 0x80 SErr 0x0 action 0x0
Apr 02 18:13:07 pve kernel: ata1.00: irq_stat 0x40000008
Apr 02 18:13:07 pve kernel: ata1.00: failed command: READ FPDMA QUEUED
Apr 02 18:13:07 pve kernel: ata1.00: cmd 60/08:38:20:9c:bf/00:00:00:00:00/40 tag 7 ncq dma 4096 in
res 41/40:38:20:9c:bf/00:00:00:00:00/a0 Emask 0x409 (media error) <F>
Apr 02 18:13:07 pve kernel: ata1.00: status: { DRDY ERR }
Apr 02 18:13:07 pve kernel: ata1.00: error: { UNC }
Apr 02 18:13:07 pve kernel: ata1.00: configured for UDMA/133
Apr 02 18:13:07 pve kernel: sd 0:0:0:0: [sda] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Apr 02 18:13:07 pve kernel: sd 0:0:0:0: [sda] tag#7 Sense Key : Medium Error [current]
Apr 02 18:13:07 pve kernel: sd 0:0:0:0: [sda] tag#7 Add. Sense: Unrecovered read error - auto reallocate failed
Apr 02 18:13:07 pve kernel: sd 0:0:0:0: [sda] tag#7 CDB: Read(10) 28 00 00 bf 9c 20 00 00 08 00
Apr 02 18:13:07 pve kernel: blk_update_request: I/O error, dev sda, sector 12557344 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Apr 02 18:13:07 pve kernel: ata1: EH complete
Apr 02 18:13:07 pve pvedaemon[4665]: Failed to run vncproxy.
Apr 02 18:13:07 pve pvedaemon[2146]: <root@pam> end task UPID:pve:00001239:004FA766:5E85BAB1:vncproxy:100:root@pam: Failed to run vncproxy.
 
Hi,

Apr 02 18:13:07 pve kernel: ata1.00: exception Emask 0x0 SAct 0x10 SErr 0x0 action 0x0
Apr 02 18:13:07 pve kernel: ata1.00: irq_stat 0x40000008
Apr 02 18:13:07 pve kernel: ata1.00: failed command: READ FPDMA QUEUED
Apr 02 18:13:07 pve kernel: ata1.00: cmd 60/08:20:20:9c:bf/00:00:00:00:00/40 tag 4 ncq dma 4096 in
res 41/40:20:20:9c:bf/00:00:00:00:00/a0 Emask 0x409 (media error) <F>

The issue comes from your disk; check the disk with smartctl.
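For example, a quick check could look like this (a sketch, assuming the failing disk is /dev/sda as in your log):

smartctl -H /dev/sda        # overall health self-assessment
smartctl -a /dev/sda        # full SMART report; watch Reallocated_Sector_Ct and Current_Pending_Sector
smartctl -t short /dev/sda  # optionally start a short self-test, then re-read the report with -a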
 
Hello,
I have the same problem.

I am running PVE 6.0-4 with two machines:
- both are CentOS 7 (1708) systems,
- there is no guest agent on either machine,
- I checked qemu-img info: identical values,
- file permissions: also identical,
- tested with two browsers, also in incognito mode,
- I had my own SSL certificate, but following your instructions I deleted it and regenerated the PVE one,
- lvm: old layout

pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

qm config 210
bootdisk: sata0
cores: 2
description:
ide2: nas:iso/CentOS-7-x86_64-Minimal-1708.iso,media=cdrom
memory: 12288
name: vmM1
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr0,tag=123
numa: 0
ostype: l26
sata0: local:210/vm-210-disk-1.qcow2,size=200G
scsihw: virtio-scsi-pci
smbios1: uuid=b695c51d-3860-41b0-85be-241decba1f15
sockets: 1

qm config 212
bootdisk: sata0
cores: 2
description:
ide2: nas:iso/CentOS-7-x86_64-Minimal-1708.iso,media=cdrom
memory: 4096
name: vmM2
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr0,tag=123
numa: 0
ostype: l26
sata0: local:212/vm-212-disk-1.qcow2,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=5b5a1085-94ba-4935-b78f-747ef402af70
sockets: 1

drwxr-xr-x 2 root root 4096 Oct 27 19:39 210
drwxr-xr-x 2 root root 4096 Oct 30 10:11 212

qemu-img info 210/vm-210-disk-1.qcow2
image: 210/vm-210-disk-1.qcow2
file format: qcow2
virtual size: 200G (214748364800 bytes)
disk size: 104G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

qemu-img info 212/vm-212-disk-1.qcow2
image: 212/vm-212-disk-1.qcow2
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 3.3G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

Machine with the problem:

Apr 3 21:09:32 pve pvedaemon[47006]: starting vnc proxy UPID:pve:0000B79E:540B06A2:5E8789EC:vncproxy:210:root@pam:
Apr 3 21:09:32 pve pvedaemon[29618]: <root@pam> starting task UPID:pve:0000B79E:540B06A2:5E8789EC:vncproxy:210:root@pam:
Apr 3 21:09:38 pve pvestatd[1313]: status update time (6.205 seconds)
Apr 3 21:09:49 pve pvestatd[1313]: status update time (6.213 seconds)
Apr 3 21:09:58 pve pvestatd[1313]: status update time (6.217 seconds)
Apr 3 21:10:00 pve systemd[1]: Starting Proxmox VE replication runner...
Apr 3 21:10:00 pve systemd[1]: pvesr.service: Succeeded.
Apr 3 21:10:00 pve systemd[1]: Started Proxmox VE replication runner.
Apr 3 21:10:06 pve qm[46914]: VM 210 qmp command failed - VM 210 qmp command 'change' failed - got timeout
Apr 3 21:10:06 pve pvedaemon[46912]: Failed to run vncproxy.
Apr 3 21:10:06 pve pvedaemon[43244]: <root@pam> end task UPID:pve:0000B740:540AFBD6:5E8789D1:vncproxy:210:root@pam: Failed to run vncproxy.
Apr 3 21:10:08 pve pvestatd[1313]: status update time (6.202 seconds)
Apr 3 21:10:28 pve pvestatd[1313]: status update time (6.215 seconds)
Apr 3 21:10:39 pve pvestatd[1313]: status update time (6.275 seconds)
Apr 3 21:10:48 pve pvestatd[1313]: status update time (6.192 seconds)
Apr 3 21:10:58 pve pvestatd[1313]: status update time (6.285 seconds)
Apr 3 21:11:00 pve systemd[1]: Starting Proxmox VE replication runner...
Apr 3 21:11:01 pve systemd[1]: pvesr.service: Succeeded.
Apr 3 21:11:01 pve systemd[1]: Started Proxmox VE replication runner.
Apr 3 21:11:06 pve qm[47021]: VM 210 qmp command failed - VM 210 qmp command 'change' failed - got timeout
Apr 3 21:11:06 pve pvedaemon[47006]: Failed to run vncproxy.
Apr 3 21:11:06 pve pvedaemon[29618]: <root@pam> end task UPID:pve:0000B79E:540B06A2:5E8789EC:vncproxy:210:root@pam: Failed to run vncproxy.
Apr 3 21:11:08 pve pvestatd[1313]: status update time (6.186 seconds)

Machine without the problem:

Apr 3 21:12:33 pve pvedaemon[47458]: starting vnc proxy UPID:pve:0000B962:540B4D1F:5E878AA1:vncproxy:212:root@pam:
Apr 3 21:12:33 pve pvedaemon[46132]: <root@pam> starting task UPID:pve:0000B962:540B4D1F:5E878AA1:vncproxy:212:root@pam:
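From the log, the QMP 'change' command to VM 210 times out, which suggests the QEMU process is not answering on its monitor socket. One way to narrow that down (a sketch against VM 210; these are standard qm commands, not something taken from this thread):

qm status 210            # is the VM reported as running?
qm monitor 210           # talk to the QEMU human monitor directly
# at the qm> prompt, try: info status and info vnc, then quit
ps aux | grep 'id 210'   # check whether the kvm process is hung (state D in the STAT column)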
 
Thank you for sharing, but my problem is that the SSD actually has bad sectors. I'm going to secure-erase the SSD and then reinstall.
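For reference, the plan is an ATA secure erase from a live environment, roughly like this (a sketch using hdparm; /dev/sdX is a placeholder, so double-check the device first, as this wipes everything on it):

hdparm -I /dev/sdX                                      # confirm the drive supports Security Erase and is "not frozen"
hdparm --user-master u --security-set-pass p /dev/sdX   # set a temporary security password
hdparm --user-master u --security-erase p /dev/sdX      # issue the erase (enhanced variant: --security-erase-enhanced)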
 
