[SOLVED] Set up a cluster with 3 nodes - SPICE remote-viewer works only on one node, xterm.js only on IPv4

fireon

Hello all,

I set up a cluster with 3 nodes, all on ZFS. On some CTs I would like to use SPICE. It works, but when I try to open it with the SPICE remote-viewer, it only works on one host. If I choose a container from another host, SPICE doesn't open and no "download.vv" file is downloaded. I only get this error in the PVE web interface:

Timeout while waiting for port '61001' to get ready! (500)

On VMs (QEMU) it works fine from all nodes. Only LXC doesn't work.
Can anyone tell me how to get this working on all hosts?

Code:
pve-manager/5.3-11/d4907f84 (running kernel: 4.15.18-12-pve)

Thanks :)
 
Works here. Are the hosts allowed to connect to each other on port 3128?
 
are those hosts configured with ipv6?
 
are those hosts configured with ipv6?
Sorry, my fault, I forgot to mention: all hosts are configured IPv6-only. The one that works with LXC and SPICE also has IPv4. But all hosts listen on the SPICE port on both IPv4 and IPv6. The port is open. KVM works.
 
Hmm, I have a suspicion what it could be. Can you open a bug report while I investigate this?
 
If I go to a node that has only an IPv6 address, it is similar with xterm.js:
Code:
Authentication failed: '500 Can't connect to 127.0.0.1:85'
TASK ERROR: command '/usr/bin/termproxy 5902 --path /vms/102 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole102 -r winch -z lxc-console -n 102 -e -1' failed: exit code 22
From the node that has dual-stack, xterm.js works fine to all other nodes (including the IPv6-only ones).
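The symptom above (connection refused on 127.0.0.1:85) is what you get whenever a client hard-codes the IPv4 loopback while the service is only reachable on the IPv6 one. A minimal sketch, assuming nothing about the actual PVE code, that reproduces the mismatch with plain sockets:

```python
import socket
import threading

# Hypothetical sketch (not PVE code): a service listening only on the
# IPv6 loopback ::1, and a client hard-coded to 127.0.0.1 the way the
# error message suggests. The port number is ephemeral.
server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
server.bind(("::1", 0))
server.listen(1)
port = server.getsockname()[1]

threading.Thread(target=lambda: server.accept(), daemon=True).start()

def can_connect(host: str) -> bool:
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

v4_ok = can_connect("127.0.0.1")  # typically fails: wrong loopback family
v6_ok = can_connect("::1")        # succeeds: that is where the server listens
print("127.0.0.1:", v4_ok)
print("::1:", v6_ok)
server.close()
```

The two loopback addresses are separate sockets in separate address families, so a listener on one never answers on the other.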
 
Attachment.
 

Attachments

  • pve03.txt (3.2 KB)
  • pve02.txt (1.8 KB)
  • pve01.txt (3.5 KB)
Hmm... looks ok. Are you sure you tested the patches? How did you apply them? Did you restart the daemons afterwards?
 
If you simply changed the files, you also have to restart pvedaemon.
 
Tested again, same result. KVM works, LXC does not.
OK, you have me stumped there... anything that might affect a connection/listening on localhost in your setup (firewall etc.)?
 
Found the problem. I had to change the line in /etc/hosts from
Code:
::1     ip6-localhost ip6-loopback
to
Code:
::1     ip6-localhost ip6-loopback localhost
After that it works. Hope that has no other bad effects.
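A quick way to check what the name localhost actually resolves to after editing /etc/hosts is a short Python sketch (nothing PVE-specific here):

```python
import socket

# List every address the name "localhost" resolves to on this machine.
# After adding "localhost" to the ::1 line in /etc/hosts, ::1 should
# show up here alongside (or, on an IPv6-only resolver, instead of)
# 127.0.0.1.
addrs = sorted({info[4][0] for info in socket.getaddrinfo("localhost", None)})
print(addrs)
```

If ::1 is missing from this list, anything that connects to "localhost" will only ever try IPv4.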

xterm.js doesn't work on IPv6. Maybe this is a problem similar to the SPICE one. (I've changed the thread title.)
Code:
Apr 03 18:49:46 pve03 pvedaemon[24591]: command '/usr/bin/termproxy 5901 --path /vms/102 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole102 -r winch -z lxc-console -n 102 -e -1' failed: exit code 22
Apr 03 18:49:46 pve03 pvedaemon[29089]: <root@pam> end task UPID:pve03:0000600F:01B42E66:5CA4E429:vncproxy:102:root@pam: command '/usr/bin/termproxy 5901 --path /vms/102 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole102 -r winch -z lxc-console -n 102 -e -1' failed: exit code 22
Maybe here...
Code:
/usr/share/perl5/PVE$ grep -R 127.0.0.1 *
CLI/termproxy.pm:    my $res = $ua->post ('http://127.0.0.1:85/api2/json/access/ticket', Content => $params);
Cluster.pm:    my $names = "IP:127.0.0.1,IP:::1,DNS:localhost";
LXC/Setup/Base.pm:    my $lo4 = "127.0.0.1 localhost.localnet localhost\n";
Service/pvedaemon.pm:    my $socket = $self->create_reusable_socket(85, '127.0.0.1');
Storage/GlusterfsPlugin.pm:     if ($server && $server ne 'localhost' && $server ne '127.0.0.1' && $server ne '::1') {
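The hard-coded 127.0.0.1 in CLI/termproxy.pm would fit the symptom: on a host where only ::1 answers, such a client can never connect. The family-agnostic pattern is to resolve the name and try every returned address. A hypothetical Python sketch of that loop (an illustration of the pattern, not actual PVE code):

```python
import socket

def connect_loopback(port: int, timeout: float = 1.0) -> socket.socket:
    """Try each address 'localhost' resolves to, IPv6 and IPv4 alike,
    instead of hard-coding 127.0.0.1. Returns the first socket that
    connects; raises the last error if none do."""
    last_err = None
    for family, stype, proto, _, addr in socket.getaddrinfo(
            "localhost", port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, stype, proto)
            s.settimeout(timeout)
            s.connect(addr)
            return s
        except OSError as e:
            last_err = e
    raise last_err
```

With this shape, the same client works on IPv4-only, IPv6-only, and dual-stack hosts, because getaddrinfo returns whatever /etc/hosts (or the resolver) maps localhost to.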
 
After that it works. Hope that has no other bad effects.
Weird, that should not have made it work any more than before, since you already had the 'localhost' entry on the 127.0.0.1 line.
So in theory the server wanted to listen on localhost (=> 127.0.0.1) and the client wanted to connect to localhost (=> 127.0.0.1). Or did you do something to disable IPv4? (Unlikely, since I saw the 127.0.0.1 in your ip addr output.)

xterm.js doesn't work on IPv6. Maybe this is a problem similar to the SPICE one. (I've changed the thread title.)
For that I sent a different patch (simultaneously with the spiceterm patch; both got applied): https://pve.proxmox.com/pipermail/pve-devel/2019-March/036160.html
 
Weird, that should not have made it work any more than before, since you already had the 'localhost' entry on the 127.0.0.1 line.
So in theory the server wanted to listen on localhost (=> 127.0.0.1) and the client wanted to connect to localhost (=> 127.0.0.1). Or did you do something to disable IPv4? (Unlikely, since I saw the 127.0.0.1 in your ip addr output.)
Not weird, because it can't work with that entry. A 127.0.0.1 is still available even if you use IPv6 only, and if you ping localhost you get this back:
Code:
PING localhost.localdomain (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.018 ms
But I think on an IPv6-only host you should get this back for it to work:
Code:
ping localhost
PING localhost(ip6-localhost (::1)) 56 data bytes
64 bytes from ip6-localhost (::1): icmp_seq=1 ttl=64 time=0.018 ms
 
