PVE 4.3 Console Access on Different Cluster Member

No solution yet. I'm starting to wonder if there's a fundamental issue with pveproxy security between machines. It isn't the SSH layer, since the keys and known_hosts entries are all in order.
Hi,
AFAIK it must have something to do with your SSH connection, or the certificate.
Does the SSH connection from each node to all other nodes work without a fingerprint question, with both IP and name?

Is the certificate valid, and does it have the same certificate chain on each host when you look with
Code:
openssl s_client -connect proxmoxX.domain.com:8006
Udo
 
This is the error here:

Unsupported security types: [object Uint8Array]

First node:

Code:
Certificate chain
 0 s:/OU=PVE Cluster Node/O=Proxmox Virtual Environment/CN=proxmox1.example.com
  i:/CN=Proxmox Virtual Environment/OU=1b1181ce65330e048f1168ab0f4bd3f3/O=PVE Cluster Manager CA

Second node:

Code:
Certificate chain
 0 s:/OU=PVE Cluster Node/O=Proxmox Virtual Environment/CN=proxmox3.example.com
  i:/CN=Proxmox Virtual Environment/OU=1b1181ce65330e048f1168ab0f4bd3f3/O=PVE Cluster Manager CA

Third node:

Code:
Certificate chain
 0 s:/OU=PVE Cluster Node/O=Proxmox Virtual Environment/CN=proxmox4.example.com
  i:/CN=Proxmox Virtual Environment/OU=1b1181ce65330e048f1168ab0f4bd3f3/O=PVE Cluster Manager CA

SSH works without issues, from each node to each other.
 
SSH works flawlessly, no prompts for fingerprints, nothing. As mentioned, the SSL certs are the standard pre-generated certificates, so if there's an issue with them, it's coming straight out of the installer. I'll try generating new ones.

[scott@sfo-proxmox01] ~ $ openssl s_client -connect localhost:8006
CONNECTED(00000003)
depth=0 OU = PVE Cluster Node, O = Proxmox Virtual Environment, CN = proxmox01...
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 OU = PVE Cluster Node, O = Proxmox Virtual Environment, CN = proxmox01...
verify error:num=27:certificate not trusted
verify return:1
depth=0 OU = PVE Cluster Node, O = Proxmox Virtual Environment, CN = proxmox01...
verify error:num=21:unable to verify the first certificate
verify return:1

Certificate chain
0 s:/OU=PVE Cluster Node/O=Proxmox Virtual Environment/CN=proxmox01....
i:/CN=Proxmox Virtual Environment/OU=7c5e4c35-9bcb-405a-bb74-94728953a953/O=PVE Cluster Manager CA

EDIT: I regenerated the certs using the instructions on the wiki, same situation.
 
Just an FYI to close this loop for the community. Dominik from support was able to help me solve this- THANK YOU!

The issue is that the VNC ticket handoff happens over SSH. In our configuration of SSH via Chef, we accidentally removed a default setting that came with the default Proxmox SSH config. Without the SendEnv and AcceptEnv settings for LC_*, it fails to pass the VNC ticket.

Here are the required config lines to make this work again.

/etc/ssh/ssh_config:
SendEnv LANG LC_*

/etc/ssh/sshd_config:
AcceptEnv LANG LC_*
 
Hmm... Weird.
I'm experiencing the same problem where I can't connect to a console on another cluster node.
As for the rest of you, I can ssh from the terminal just fine, and I have these SSH settings already set by default after install.

/etc/ssh/ssh_config:
SendEnv LANG LC_*

/etc/ssh/sshd_config:
AcceptEnv LANG L*

Doesn't matter what browser I use, same connection error.

Any idea what might be wrong?
Thanks for any hints and/or tips.

//AvG




root@dragonborn:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.78-1-pve: 5.4.78-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
Doesn't matter what browser I use, same connection error.

Any idea what might be wrong?
Thanks for any hints and/or tips.
do you have any special bashrc configs for root?
can you really ssh with the hostname, non-interactively?
 
do you have any special bashrc configs for root?
can you really ssh with the hostname, non-interactively?

In /root/.bashrc I uncommented these:

Code:
export LS_OPTIONS='--color=auto'
eval "`dircolors`"
alias ls='ls $LS_OPTIONS'
alias ll='ls $LS_OPTIONS -al'
alias rm='rm -i'

Otherwise it's untouched.

In a similar thread I found the command below to test what I understand to be the batch-mode connection VNC uses.
Running it gives me access without a password prompt, but the prompt looks kind of funny.
Maybe because of the BatchMode flag?
The commands do seem to be executed properly, though.

You also mention the hostname; I use the IP instead.
Is there a functional difference between using the IP and the hostname when connecting to the other node's console through the web GUI?



Code:
# ssh -T -o 'BatchMode=yes' 192.168.0.7
Linux smaug 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
ls -al
total 56
drwx------ 6 root root 4096 Dec 13 15:45 .
drwxr-xr-x 18 root root 4096 Nov 28 10:36 ..
-rw-r--r-- 1 root root 25 Nov 28 10:35 .forward
...
df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.6G 0 1.6G 0% /dev
tmpfs 324M 34M 291M 11% /run
/dev/mapper/pve-root 28G 3.2G 23G 13% /
...
 
You also mention the hostname; I use the IP instead.
Is there a functional difference between using the IP and the hostname when connecting to the other node's console through the web GUI?
it makes a difference for SSH because of the known_hosts file, which we auto-populate with the node names, not the IPs.
so the correct command to test it is:

Code:
/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=$TARGET_NODENAME' root@$TARGET_IP /bin/true
 
it makes a difference for SSH because of the known_hosts file, which we auto-populate with the node names, not the IPs.
so the correct command to test it is:

Code:
/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=$TARGET_NODENAME' root@$TARGET_IP /bin/true

Doh! Of course it does... I'll go stand in a corner for a while.

I'll see what I can do about this.
Thanks for pointing me back in the right direction!
 
it makes a difference for SSH because of the known_hosts file, which we auto-populate with the node names, not the IPs.
so the correct command to test it is:

Code:
/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=$TARGET_NODENAME' root@$TARGET_IP /bin/true

I fixed it. It works now.
Thanks for pointing out the obvious to me! :)
 
