VNC console error

decibel83

Hi.
I'm experiencing a lot of problems with the VNC console.
If I try to open the VNC console of a KVM virtual machine I get this error:

Failed to connect to server (code: 1006).

This happens on every PVE node, from Chrome, Safari, and Firefox, and on all of my virtual machines.
The virtual machine is running, of course.

From the tasks log I see the following:

Code:
TASK ERROR: command '/bin/nc -l -p 5900 -w 10 -c '/usr/bin/ssh -T -o BatchMode=yes 192.168.60.1 /usr/sbin/qm vncproxy 101 2>/dev/null'' failed: exit code 255


If I execute this command from the console and telnet to port 5900 of the node, the connection works:

Code:
root@node1:~# /bin/nc -l -p 5900 -w 10 -c '/usr/bin/ssh -T -o BatchMode=yes 192.168.60.1 /usr/sbin/qm vncproxy 101'

MyClient:~ mattia$ telnet 192.168.60.1 5900
Trying 192.168.60.1...
Connected to 192.168.60.1.
Escape character is '^]'.
RFB 003.008
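Since exit code 255 is what ssh itself returns when it cannot connect or authenticate, the SSH leg of the proxy command can be tested on its own (a sketch using the node address 192.168.60.1 and VMID 101 from the error above):

Code:
# Run only the SSH portion of the failing proxy command.
# If it prints "RFB 003.008" the SSH leg is fine; an immediate
# exit with status 255 points at an SSH (host key or auth) problem.
/usr/bin/ssh -T -o BatchMode=yes 192.168.60.1 /usr/sbin/qm vncproxy 101
echo $?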

My PVE cluster is up to date:

Code:
root@node1:~# pveversion -v
proxmox-ve-2.6.32: 3.4-150 (running kernel: 2.6.32-37-pve)
pve-manager: 3.4-3 (running version: 3.4-3/2fc72fee)
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.4-3
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-32
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Could you help me please?
 
Perhaps a fix! I went to /etc/pve/ on my Proxmox host, downloaded pve-root-ca.pem, and opened it on my Mac. I then got the option to always trust this certificate, and noVNC works!
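If you prefer to check the certificate from the command line before trusting it, something along these lines should work (a sketch; node1.example.com stands in for your node's address):

Code:
# Show the certificate chain the PVE web interface presents on port 8006
openssl s_client -connect node1.example.com:8006 -showcerts </dev/null

# Or copy the cluster CA from the node and inspect it first
scp root@node1.example.com:/etc/pve/pve-root-ca.pem .
openssl x509 -in pve-root-ca.pem -noout -subject -dates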
 
I have a cluster with node1 and node2.
When I manage the cluster via https://node1 I'm able to use noVNC only for VMs on node1.
When I try to open noVNC for VMs on node2 I get the TASK ERROR above.
So to manage VMs on node2 I have to open https://node2.
Is this expected? I'm using the latest PVE 3.4.
As far as I remember, it worked in 3.2.
 
@ppo That's interesting. I use a single server to manage my cluster: one server through which I create, manage, and console into all VMs.
 
@ppo Maybe try using the full FQDN (DNS hostname).

Your nodes also need to be able to resolve these hostnames.
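For example, every node's /etc/hosts (or your DNS) should map each cluster node's name to the correct address (a sketch with hypothetical names and addresses):

Code:
# /etc/hosts on every node -- names and addresses are placeholders
192.168.60.1   node1.example.com node1
192.168.60.2   node2.example.com node2

You can then check resolution on each node with 'getent hosts node2'.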
 
What is the difference between
/etc/pve/priv/known_hosts
and
/root/.ssh/known_hosts?

When I SSH by name, I get access:
Code:
root@pve1:~# ssh pve2
Linux pve2 2.6.32-39-pve #1 SMP Fri May 8 11:27:35 CEST 2015 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Jun 24 09:31:02 2015 from pve1.bla.bla
root@pve2:~#
But when I SSH to the same host by IP, I get:
Code:
root@pve1:~# ssh 192.168.5.229
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
1e:be:79:c5:a7:f5:9e:a2:bd:30:7d:6c:84:73:2c:e7.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending RSA key in /etc/ssh/ssh_known_hosts:4
ECDSA host key for 192.168.5.229 has changed and you have requested strict checking.
Host key verification failed.

I believe this is because I connected to pve2 (192.168.5.229) from pve1 before I reinstalled pve2.
Is it safe to just erase /root/.ssh/known_hosts and /etc/pve/priv/known_hosts (/etc/ssh/ssh_known_hosts is a symlink to /etc/pve/priv/known_hosts), so that these files are repopulated the next time I SSH from pve1 to the other hosts?

Should the known_hosts files be identical on all nodes in the cluster?
 
I think I've solved my noVNC connection error from one node to the other:
Code:
TASK ERROR: command '/bin/nc -l -p 5900 -w 10 -c '/usr/bin/ssh -T -o BatchMode=yes 192.168.5.229 /usr/sbin/qm vncproxy 123 2>/dev/null'' failed: exit code 255
Somehow it was related to the known_hosts issue from my previous post.
I cleaned /root/.ssh/known_hosts and deleted the fourth line (the offending one) in /etc/pve/priv/known_hosts.
After that I SSH-ed from node1 to node2, new entries were added to /root/.ssh/known_hosts, and now I can SSH to node2 without errors and noVNC works from node1 to node2.
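For anyone hitting the same thing: the stale entries can also be removed with ssh-keygen instead of editing the files by hand (a sketch, assuming the offending entry is for 192.168.5.229 as in the warning above):

Code:
# Remove the stale key from root's personal known_hosts
ssh-keygen -R 192.168.5.229 -f /root/.ssh/known_hosts

# /etc/ssh/ssh_known_hosts is a symlink to this cluster-wide file,
# so drop the stale entry there as well
ssh-keygen -R 192.168.5.229 -f /etc/pve/priv/known_hosts

# Reconnect once so the new host key is recorded again
ssh 192.168.5.229 true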
 
Hi all,
I'm using the Proxmox 4 beta on Debian Jessie. For me, using the 'Always Trust' option in Safari worked. This imports the SSL certificate into your keychain. noVNC works like a charm now.

Best,
Ole
 
