Can't open console, get error message

informant

Renowned Member
Jan 31, 2012
Hi, when I open the console on 5 of our nodes, I get the following error message:

Code:
FAILED to connect to Server

Host key verification failed.
TASK ERROR: Failed to run vncproxy.

In the log on the cluster I found:

Code:
Jan 30 10:21:28 pegasus systemd[1]: Started PVE API Proxy Server.
Jan 30 10:21:34 pegasus pvedaemon[1577]: <admin@pve> starting task UPID:pegasus:000011DD:0002276A:5C516C9E:vncproxy:4130:admin@pve:
Jan 30 10:21:34 pegasus pvedaemon[4573]: starting vnc proxy UPID:pegasus:000011DD:0002276A:5C516C9E:vncproxy:4130:admin@pve:
Jan 30 10:21:34 pegasus pvedaemon[4573]: Failed to run vncproxy.
Jan 30 10:21:34 pegasus pvedaemon[1577]: <admin@pve> end task UPID:pegasus:000011DD:0002276A:5C516C9E:vncproxy:4130:admin@pve: Failed to run vncproxy.


All other nodes work and we can open the console normally. What is the problem here and how can we fix it? I have read other entries in the forum, but no posts helped here. Any ideas?
Best regards
 
'Host key verification failed.' sounds like the host's RSA key has changed (did you reinstall something?) and no longer matches the one in the .ssh/known_hosts file... Have you tried to SSH to the node directly?
 
Hi Chris, I can connect normally from node to cluster and from cluster to node with ssh <host>; that works fine.
I didn't change a host or a key. Do you have an idea how to fix it?
Regards
 
Hmmm, I see you run as admin@pve and not as root@pam... Can you check whether you encounter the same issue as root?
 
Hi, sure, I have done that, but it's the same issue. The console shows the error when I open the console of the node.
Log of the cluster:
Code:
Jan 30 12:05:56 pegasus pvedaemon[1576]: <root@pam> successful auth for user 'root@pam'
Jan 30 12:05:59 pegasus pvedaemon[1576]: <root@pam> starting task UPID:pegasus:00003682:000BB6A6:5C518517:vncproxy:4132:root@pam:
Jan 30 12:05:59 pegasus pvedaemon[13954]: starting vnc proxy UPID:pegasus:00003682:000BB6A6:5C518517:vncproxy:4132:root@pam:
Jan 30 12:05:59 pegasus pvedaemon[13954]: Failed to run vncproxy.
Jan 30 12:05:59 pegasus pvedaemon[1576]: <root@pam> end task UPID:pegasus:00003682:000BB6A6:5C518517:vncproxy:4132:root@pam: Failed to run vncproxy.
 
An idea: in /etc/pve/priv/known_hosts there are many entries, more than I have nodes. Can I clean it out and re-add the entries with ssh <node>, or what does this file do? /etc/pve/authorized_keys has normal entries. I need help to get the console working again. Regards
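As a read-only starting point, here is a small sketch for spotting hosts that have more than one entry in a known_hosts-style file. The sample file and its entries are made up for illustration; on a real cluster you would point the awk pipeline at /etc/pve/priv/known_hosts instead.

```shell
# Sketch: list hosts that appear more than once in a known_hosts-style file.
# The file and its entries below are made up for illustration only.
kh=$(mktemp)
cat > "$kh" <<'EOF'
node1 ssh-rsa AAAAB3-oldkey
node1 ssh-rsa AAAAB3-newkey
node2 ssh-rsa AAAAB3-key
EOF
# The first field of each line is the host name; print names that occur
# more than once.
dupes=$(awk '{print $1}' "$kh" | sort | uniq -d)
echo "$dupes"
rm -f "$kh"
```

This only inspects the file; it does not delete anything.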
 
You should maybe check the fingerprints of all your nodes via 'ssh-keyscan host' first; don't simply delete entries...
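For reference, a minimal sketch of what the bookkeeping around such a check can look like, assuming OpenSSH client tools are installed. The host name node1 and the throwaway key are invented; on a real node you would compare live `ssh-keyscan <node>` output against the entries stored in /etc/pve/priv/known_hosts or ~/.ssh/known_hosts.

```shell
# Sketch: build a throwaway known_hosts file, look up a host's stored key
# with ssh-keygen -F, and remove a single stale entry with ssh-keygen -R.
tmpdir=$(mktemp -d)
# Generate a demo key pair to stand in for a node's host key.
ssh-keygen -q -t ed25519 -N '' -f "$tmpdir/hostkey"
printf 'node1 %s\n' "$(cut -d' ' -f1-2 "$tmpdir/hostkey.pub")" > "$tmpdir/known_hosts"
# Look up the stored key for node1 (this is what ssh-keyscan output would
# be compared against):
found=$(ssh-keygen -F node1 -f "$tmpdir/known_hosts")
echo "$found"
# Remove only node1's entry, leaving all other hosts untouched:
ssh-keygen -R node1 -f "$tmpdir/known_hosts" > /dev/null 2>&1
remaining=$(wc -l < "$tmpdir/known_hosts")
rm -rf "$tmpdir"
```

This way a single stale entry can be replaced without wiping the whole file.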
 
Hi, if I use the command ssh-keyscan mynodename and press Enter, no info comes back, only a new line in the shell. Is that normal? I have done it on my cluster and on a node where the console doesn't work. After using this command, the RRD graphs on all nodes where I ran it only show up sporadically. Is that normal? Where can I remove all known_hosts entries and add them anew? I suspect there are old, wrong keys in the file, but I don't know in which files I can delete everything and add it again fresh.
 
ssh-keyscan should work with the IP, or with the hostname if it is resolvable... Anyway, since you are able to connect via SSH without any problems, known_hosts is probably not the cause of the error. Don't delete entries there.
Did you touch the SSL certificates on the nodes recently? That might also cause host key verification errors.
 
Hi, but all nodes and the cluster have each other's SSH keys in known_hosts. Can I remove known_hosts in /etc/pve and in /root/.ssh and add them again to solve it? Or what is your idea?
 
Hi Chris, I have tested again: I can connect from the cluster to the nodes and from the nodes to the cluster with ssh <nodename/clustername> directly, but the console shows the error. What can I do to debug it and get the exact error?
I have also tested logging in as root via the node IP; there I can start the console normally without a problem. The error only comes when I log in as a normal user or as admin via PVE on the cluster. As root I can open the console on all nodes; as a user or admin logged in via PVE I can't.
I need a solution :(
Regards
 
Just as a side note: if you run this in production, I strongly suggest you get an enterprise subscription: https://www.proxmox.com/en/proxmox-ve/pricing
Oh, so what did you change to have it working for the root account? This is new...
Did the error message change? Do the users have the right privileges / are they part of the right group? Sys.Console grants console access to a node, VM.Console grants console access to a VM.
What version of PVE are you running (pveversion -v)? Are all nodes on the same version?
 
Hi @Chris, yes, if I log in as root it works fine; as admin/user the error comes up in the console. I haven't changed anything, I only rebooted the node that has the error.
The versions are the same on the nodes and the cluster.

Yes, the rights are correct. Only 2 nodes have this problem; on all other nodes I can connect to the console as user/admin.
The user admin is in the group administrator, and that group has Administrator privileges.

As an error on the node, I found the following today when connecting to the console:
Code:
Jan 31 14:52:01 daedalus sshd[25354]: rexec line 23: Deprecated option KeyRegenerationInterval
Jan 31 14:52:01 daedalus sshd[25354]: rexec line 24: Deprecated option ServerKeyBits
Jan 31 14:52:01 daedalus sshd[25354]: rexec line 35: Deprecated option RSAAuthentication
Jan 31 14:52:01 daedalus sshd[25354]: rexec line 42: Deprecated option RhostsRSAAuthentication
Jan 31 14:52:01 daedalus sshd[25354]: Connection closed by 217.88.200.67 port 56384 [preauth]

node:
Code:
 pveversion -v
proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
pve-manager: 5.3-8 (running version: 5.3-8/2929af8e)
pve-kernel-4.15: 5.3-1
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.10.15-1-pve: 4.10.15-15
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-43
libpve-guest-common-perl: 2.0-19
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-36
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-2
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-33
pve-container: 2.0-33
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-17
pve-firmware: 2.0-6
pve-ha-manager: 2.0-6
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 3.10.1-1
qemu-server: 5.0-45
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1

cluster:
Code:
pveversion -v
proxmox-ve: 5.3-1 (running kernel: 4.15.18-10-pve)
pve-manager: 5.3-8 (running version: 5.3-8/2929af8e)
pve-kernel-4.15: 5.3-1
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.10.15-1-pve: 4.10.15-15
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-43
libpve-guest-common-perl: 2.0-19
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-36
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-2
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-33
pve-container: 2.0-33
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-17
pve-firmware: 2.0-6
pve-ha-manager: 2.0-6
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 3.10.1-1
qemu-server: 5.0-45
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
 
It seems to me that you might have an sshd configuration issue; please check your '/etc/ssh/sshd_config'.
 
OK, I'll check it and test with another, working config; before that I'll compare the differences against a working one. I'll send you feedback later. Regards
 
Hi @Chris and others, I have tested it: I took the sshd_config from a working node and used it on a node that doesn't work, then restarted the SSH service, but the error stays the same. Logging in to the console does not work as user/admin, and the error in the syslog is the same. Any ideas? I hope you or anyone else can help here. Regards

PS: What I have found is that the error in the syslog is only an info message saying the options are deprecated, and I can clean them out of sshd_config:
Code:
Jan 31 20:06:22 daedalus sshd[1909]: /etc/ssh/sshd_config line 23: Deprecated option KeyRegenerationInterval
Jan 31 20:06:22 daedalus sshd[1909]: /etc/ssh/sshd_config line 24: Deprecated option ServerKeyBits
Jan 31 20:06:22 daedalus sshd[1909]: /etc/ssh/sshd_config line 35: Deprecated option RSAAuthentication
Jan 31 20:06:22 daedalus sshd[1909]: /etc/ssh/sshd_config line 42: Deprecated option RhostsRSAAuthentication
Jan 31 20:06:22 daedalus sshd[1913]: /etc/ssh/sshd_config line 23: Deprecated option KeyRegenerationInterval
Jan 31 20:06:22 daedalus sshd[1913]: /etc/ssh/sshd_config line 24: Deprecated option ServerKeyBits
Jan 31 20:06:22 daedalus sshd[1913]: /etc/ssh/sshd_config line 35: Deprecated option RSAAuthentication
Jan 31 20:06:22 daedalus sshd[1913]: /etc/ssh/sshd_config line 42: Deprecated option RhostsRSAAuthentication
^^ So these lines can be safely removed.
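For anyone hitting the same messages, a small sketch to locate those options before deleting them. The sample config is made up; on the affected node you would grep /etc/ssh/sshd_config directly. (These four options belong to the old SSH protocol 1 and are ignored by current OpenSSH versions.)

```shell
# Sketch: find deprecated protocol-1 options in an sshd_config-style file.
# The sample file below is made up; grep /etc/ssh/sshd_config on a real node.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Port 22
KeyRegenerationInterval 3600
ServerKeyBits 1024
RSAAuthentication yes
RhostsRSAAuthentication no
PermitRootLogin yes
EOF
# -n prints the line numbers, which sshd's warnings also reference.
deprecated=$(grep -nE '^(KeyRegenerationInterval|ServerKeyBits|RSAAuthentication|RhostsRSAAuthentication)' "$cfg")
echo "$deprecated"
rm -f "$cfg"
```

The matched line numbers are the ones to delete (or comment out) before reloading sshd.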

After removing them and reloading the service, I only get the following error in the syslog when I open the console on this node:

Code:
Jan 31 20:14:08 daedalus sshd[3162]: Connection closed by 217.88.200.67 port 34048 [preauth]
Jan 31 20:14:08 daedalus pmxcfs[2047]: [status] notice: received log

In auth.log I found only:
Code:
Jan 31 20:13:46 daedalus sshd[1913]: Received signal 15; terminating.
Jan 31 20:13:46 daedalus sshd[3065]: Server listening on 0.0.0.0 port 21739.
Jan 31 20:13:46 daedalus sshd[3065]: Server listening on :: port 21739.
Jan 31 20:13:55 daedalus sshd[3101]: Connection closed by 217.88.200.67 port 34034 [preauth]
Jan 31 20:14:08 daedalus sshd[3162]: Connection closed by 217.88.200.67 port 34048 [preauth]
 
