PVE 4.3 Console Access on Different Cluster Member

ScottR
Member · Dec 27, 2015
Somewhere around version 3.4, the default behavior for opening consoles across a cluster changed. With a three-node cluster, if you are connected to, say, node01, the UI generates authentication errors when you try to open a console on a VM that lives on any other node. In the past this surfaced as a specific "security type 19" error; now it reports:

Unsupported security types: [object Uint8Array]

** NOTE ** New thread based on Tom's feedback.

> this works here, so I assume you have a cluster communication problem somewhere. please open a new
> thread as this one is old and seems unrelated.

Can you elaborate on the cluster communication channel needed for this to work?
 
Another data point for this problem: on the same cluster where this breaks for KVM, it works fine for LXC, so it must be something in the KVM noVNC path. Any help would be appreciated.
 
do you use custom https certificates?
 
> do you use custom https certificates?

Nothing exotic. We're still on the original self-signed certificates.

EDIT: If I set up a default cluster from a fresh install, I also see the behavior with KVM, so I don't think it's a case of expired/invalid certificates.
 
I am encountering exactly the same issue as the OP. I have tried several things: a self-signed certificate, a Let's Encrypt certificate, and a CA-issued certificate, all with no luck. I have seen several people with the same issue, but no one has posted a solution.
I know the issue lies in a mismatched security type between websockify and pveproxy (details below), but I don't know how to fix it. Any idea?

More detail:

Web browser -> noVNC (GUI node) -> websockify (GUI node) -> pveproxy (remote node) -> VNC server (remote node)

The issue seems to be at the websockify (GUI node) -> pveproxy (remote node) hop.
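One way to narrow down where it breaks (a sketch; VM IDs 100/101 and the hostnames are just examples, but the vncproxy endpoint is standard PVE API) is to request a VNC proxy ticket through the API and see whether the cross-node call itself fails:
Code:
# run on the node serving the GUI (proxmox01); VM 100 is assumed to live on proxmox03
# if this fails, the problem is in the API/inter-node layer, not in noVNC itself
pvesh create /nodes/proxmox03/qemu/100/vncproxy

# for comparison, the same call for a VM local to this node (VM 101) should succeed
pvesh create /nodes/proxmox01/qemu/101/vncproxy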
 
what browser/os do you use?
 
The behavior is identical in all browsers on OS X (Firefox, Chrome, Safari). It also happens with the same browsers on Windows, as well as Internet Explorer/Edge. I can't set up a cluster where this doesn't happen, going all the way back to 3.4.
 
this is weird, because i have yet to see this problem here. what does your network look like? do you have anything special installed? what does your
/etc/hosts look like?

i have a cluster of 3 machines here (x.x.x.71, x.x.x.72, x.x.x.73)
i connect to the first one via the web gui and open the console of a vm running on the second or third one with no problem.
 
Are you testing with KVM or with LXC? With containers it works fine; you need to open a console session to a KVM VM to reproduce it. The contents of /etc/hosts are fairly standard:

10.12.1.217 proxmox01.redacted proxmox01 pvelocalhost
10.12.1.218 proxmox02.redacted proxmox02
10.12.1.219 proxmox03.redacted proxmox03

In the attached screenshot, we're connected to proxmox01 and opening a console on proxmox03.

All systems use PAM for UI login, with the same user but different passwords on each node. The behavior is the same even when the passwords match.

The cluster is communicating correctly:

Quorum information
------------------
Date: Thu Oct 20 11:14:40 2016
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000001
Ring ID: 1/148
Quorate: Yes

Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.12.1.217 (local)
0x00000002 1 10.12.1.218
0x00000003 1 10.12.1.219
 

[Attachment: Screenshot 2016-10-20 15.12.54.png]
Do you have any hints on the connection process? I.e., is it passing an auth cookie via the browser, or is it doing some kind of handshake at the network level? The cluster works fine, and there's full connectivity via SSH.

The only other thing I think could be related is the REST API. When you want to clone from a template on proxmox01 to proxmox03, you have to connect to proxmox01 with the API; with pvesh this doesn't seem to be a requirement. Maybe the same authorization problem is behind both situations? (See the sketch below.)

It would seem plausible that it's a partial authorization issue in pveproxy across hosts; operations like start/stop work fine.
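For illustration, a minimal sketch of the clone call in question (template ID 9000, new VM ID 120, and the hostnames are hypothetical; the /clone endpoint with newid/target is the standard PVE API):
Code:
# clone template 9000 (which lives on proxmox01) onto proxmox03;
# via the HTTP API this call must be sent to proxmox01, the node that
# owns the template, while pvesh run locally routes it for you
pvesh create /nodes/proxmox01/qemu/9000/clone --newid 120 --target proxmox03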
 
yes i am using a kvm vm, no problems there.

the connection works something like this:

novnc -> node(local) -> nc6 (listens on a port) -> ssh -> node(remote) -> qm vncproxy -> qemu
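to sanity-check the ssh leg of that chain by hand, something like this should work (vmid 100 on proxmox03 is just an example):
Code:
# the cluster keys should allow non-interactive root ssh between nodes
ssh -o BatchMode=yes root@proxmox03 true && echo "ssh ok"

# then try the remote vnc proxy command itself; it should start and
# wait for a client on stdin/stdout instead of erroring out
ssh root@proxmox03 qm vncproxy 100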

can you post the output of
Code:
pveversion -v

on all relevant nodes?
 
SSH is an interesting point. Is there a specific user that's used, or is it the root-user cluster channel? We do implement fail2ban and some additional users with SSH keys via Chef, but it has never affected the cluster. For example, all SSH communication works fine with the SSH keys set up by pvecm.

[root@proxmox03] /home/scott $ ssh proxmox01

The programs included with the Debian GNU/Linux system are free software;
...

All nodes have identical packages:

proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-3 (running version: 4.3-3/557191d3)
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-91
pve-firmware: 1.1-9
libpve-common-perl: 4.0-75
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-66
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.2-2
pve-container: 1.0-78
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.7.15-1
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
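(For anyone who wants to verify the same thing, a quick loop like this compares versions across nodes; hostnames as above:)
Code:
# collect pveversion output from each node and diff against the first
for n in proxmox01 proxmox02 proxmox03; do
    ssh root@$n pveversion -v > /tmp/$n.ver
done
diff /tmp/proxmox01.ver /tmp/proxmox02.ver \
  && diff /tmp/proxmox01.ver /tmp/proxmox03.ver \
  && echo "all identical"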
 
> SSH is an interesting point. Is there a specific user that's used, or is it the root-user cluster channel?
no we use the root user for this

but maybe fail2ban adds some firewall rules which disrupt the nc/ssh connection?
 
> no we use the root user for this
>
> but maybe fail2ban adds some firewall rules which disrupt the nc/ssh connection?

I'll review its settings and try disabling it to make sure it's not the culprit. However, I'm still wondering if something else could cause this: fail2ban only kicks in if someone is brute-forcing and causing auth failures. By itself it takes no action; the default policy is to allow everything until it inserts a block after a succession of failed logins. I also set up a fresh cluster without any SSH modifications and it still happened.
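For reference, something like this should confirm fail2ban hasn't inserted anything (chain naming varies between fail2ban versions, so grep loosely):
Code:
# list active jails and any currently banned IPs
fail2ban-client status

# look for fail2ban chains/rules in the firewall; older releases
# name chains fail2ban-<jail>, newer ones f2b-<jail>
iptables -S | grep -iE 'fail2ban|f2b'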

Are there any specific logs I can look at that would show errors in pveproxy?
 
I've removed fail2ban and verified there were no remaining iptables rules. I also reverted to the default sshd config and still see the same behavior. Beyond that, we're on a completely standard Proxmox install.
 
hmm, this is really strange....

what does your network look like?
 
All of the systems have consecutive IPs in the same 10.12.1.0/24 network and are connected to the same switch. I can reach *any* system just by SSHing to the hostname as root; no key-signature problems, nothing. There are no iptables rules defined on the hosts, and I reverted all sshd_config changes to the Debian defaults. Color me puzzled; it's clearly something in our process of installing Proxmox. I can provide our internal documentation on the install process offline if you'd like to review it in case a step is different. FWIW, we base everything off the wiki documentation.

Note: we're processing a PO for cluster support internally right now. If you want to take this offline with a ticket, that's fine; we'll post the solution back here for anyone following.
 
@ScottR Did you resolve this issue? I am having the same problem: I can't get the noVNC console working for LXCs running on any node other than the one the browser GUI is connected to.
 
No solution yet. I'm starting to wonder if there's a fundamental issue in pveproxy security between machines. It isn't the SSH layer, since the keys and known hosts are all in order (checks below).
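For anyone following along who wants to double-check the cluster-managed SSH files: as far as I know, root's keys and the known-hosts file are symlinked into /etc/pve/priv, which is shared cluster-wide:
Code:
# both should be symlinks into the cluster-wide /etc/pve/priv/
ls -l /root/.ssh/authorized_keys   # -> /etc/pve/priv/authorized_keys
ls -l /etc/ssh/ssh_known_hosts     # -> /etc/pve/priv/known_hosts

# each node's host key and root key should appear in the shared files
grep -c . /etc/pve/priv/known_hosts
grep -c . /etc/pve/priv/authorized_keys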
 
