Cannot ssh betwixt cluster members

stuartbh

Active Member
Dec 2, 2019
All,

I have two cluster members that can ping each other, and from each I can telnet to the other's ports 22 and 8006, but I cannot ssh from either one to the other. This started after upgrading to PVE 8.

Even with the firewalls shut off on both nodes, ssh still does not work. I ran ssh -vvvv and saw that it stopped at:

Code:
debug3: send packet: type 30
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY

and then timed out.

Stuart
 
Moayad,

Yes, they both can ping their gateway and in fact both can obtain updates as well.

VLANs are employed. The node that currently cannot be ssh'ed into (even as root, which removes any concern about the FreeIPA server I run for user management) can be pinged from a workstation on an access port in the same VLAN the misbehaving node uses, so I conclude that VLANs are not the issue. Moreover, that same workstation can telnet to ports 22 and 8006 on the misbehaving node and gets a connection. However, I cannot establish an SSH connection to that node, nor does it show up properly when I log in to the GUI of any other node. Additionally, "sudo pvecm nodes" shows all 5 nodes (including the misbehaving one), each with 1 vote.


Stuart
 
Thank you for the additional info!

Can you please run the following command on one node, and then test the connection?

Bash:
pvecm updatecerts --force
 
Moayad,

It is worth noting that I did try that command before posting here (granted, without the additional '--force' option you provided). In the past it had no effect; this time, after running it, one more node stopped providing information via the console and now returns connection error 596. The original node shows no error but is still unresponsive to the GUI and to SSH.

The pve4 node, which stopped working only after running the command above, was rebooted, and that seems to have brought it back to a normal state. The pve5 node that was not working (and prompted my posting here) was rebooted as well, but it remains dysfunctional.
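In case it is useful to anyone reading later: I suspect that restarting just the PVE API/GUI services might have been enough for pve4 instead of a full reboot, though that is only my assumption. The service names below are the standard PVE ones:

Bash:
# restart the GUI/API-related services rather than rebooting the whole node
systemctl restart pvedaemon pveproxy pvestatd
# confirm they came back up
systemctl status pvedaemon pveproxy pvestatd --no-pager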

This command seems to have made things worse (until the node I ran it on was rebooted). Is there some other troubleshooting we should try before attempting anything else?

Stuart
 
@neodemus You are my hero for linking to that post. This was driving me insane.

To all those future internet strangers out there, it's jumbo frames. Disable them and you should be back to normal.
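If it helps, a rough sketch of what disabling them looks like on the Proxmox side - the interface and bridge names below are just placeholders for your own - is to set the MTU back to the 1500 default in /etc/network/interfaces and apply it with ifreload -a (or a reboot):

Code:
# /etc/network/interfaces (sketch - eno1/vmbr0 are placeholder names)
iface eno1 inet manual
        mtu 1500

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500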

It's curious that the cheap no-name switch I was using previously worked fine, but when I upgraded my rack and put in a Cisco switch to learn IOS, it wouldn't work with it. I would have thought it would be the other way around.
 
I have PVE 8.0.4 running and have been using MTU 9000 on all interfaces for years. This is not a problem with Arista switches.

The problem is more likely that manufacturers implement it differently and some expect a larger value (counting headers differently). It's best to check the switch config and verify that large packets aren't getting dropped.
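A quick way to verify that from a node is a don't-fragment ping at the jumbo payload size; the target IP is just an example, and 8972 is 9000 minus the 20-byte IP and 8-byte ICMP headers:

Bash:
# should succeed end-to-end if the switch really passes 9000-byte frames
ping -M do -s 8972 -c 3 10.10.10.12
# same test at the standard MTU for comparison (1472 = 1500 - 28)
ping -M do -s 1472 -c 3 10.10.10.12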
 

Jumbo frames can be important to network performance, depending on what you do with your Proxmox. I would disable them only if the MTU on the switch cannot be increased. I had the same problem years ago on a Dell M1000e blade chassis with Dell 10/40 MXL switches (ex-Force10), but I easily fixed it by increasing the MTU to 12000 on the trunk interfaces the Proxmox blades were connected to. Not every switch allows the MTU to be set that high, but it should at least be possible to set it to 9000.
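For illustration, the node side of that approach (keeping jumbo frames and letting the switch carry them) might look something like this in /etc/network/interfaces - the interface name and addressing are placeholders for a dedicated cluster/storage segment:

Code:
# dedicated cluster/storage interface kept at MTU 9000
# (eno2 and 10.10.10.0/24 are placeholder values)
auto eno2
iface eno2 inet static
        address 10.10.10.11/24
        mtu 9000

with the switch ports on that segment set to a large enough L2 MTU (9216 is a common maximum).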

The problem is that different platforms/OSes, different vendors, and even different devices from the same vendor use the MTU setting ambiguously. In reality, there are multiple MTUs in play: most notably a layer 2 MTU that applies to any Ethernet frame (or any other L2 protocol) and a layer 3 MTU, the so-called "IP MTU", that applies to the IP packet size, including the IP header. This is why, for example, you can set MTU to 9216 on a trunk port on a Cisco IOS (XE) router and then set "ip mtu 1500" on its L3 subinterface. If you don't set "ip mtu", the subinterface will use the main interface's MTU. The same goes for, say, Juniper devices, where you can set the MTU at the interface level and then separately for each family section.

Why is this important? Because a single interface can carry more than just IP traffic, and while it is OK for IP traffic to have an MTU of 1500, other types of traffic may require more, due to different defaults, encapsulation, etc. For example, for an LDP-based MPLS pseudowire to pass, the L2 MTU of an interface has to be at least 1522 bytes (1514-byte encapsulated untagged Ethernet frame + 4 bytes VC tag + 4 bytes outer tag), but more likely 1530 (1518-byte encapsulated tagged Ethernet frame + 4 bytes control word + 4 bytes VC tag + 4 bytes outer tag). Yet the IP MTU of that same interface can (and probably should) be explicitly set to 1500. Many more protocols follow the same pattern, but not all.

I haven't played with Linux networking all that much, but AFAIK, on Linux you cannot set the L2 MTU and the IP MTU separately on the same interface - only the interface (L2) MTU - but you can use subinterfaces for different purposes, each with its own MTU.
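A minimal sketch of that last point with iproute2, assuming I have the Linux side right (interface and VLAN names are hypothetical): the parent interface carries the large MTU, and a VLAN subinterface can be set lower, but never higher than its parent:

Bash:
# parent interface: large (L2) MTU
ip link set dev eno1 mtu 9000
# VLAN 20 subinterface for ordinary IP traffic, kept at 1500
ip link add link eno1 name eno1.20 type vlan id 20
ip link set dev eno1.20 mtu 1500
ip link set dev eno1.20 up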

In normal circumstances, the default IP MTU is 1500 on Ethernet networks for just about any device, so setting it higher on public/internet-facing interfaces would lead to problems, the least of which is decreased performance and increased load due to fragmentation. However, on isolated segments, for example a dedicated Proxmox cluster network or the network between Proxmox nodes and a NAS/SAN device, using a high IP MTU is recommended for performance. This involves setting the IP MTU on all interfaces connected to that segment, but also the (L2) MTU on the switches those interfaces are physically connected to.

Now, there's another ambiguity: some platforms expect raw MTU values (including the frame header size), some just the payload size (making you account for the header size yourself when relating the payload size to the L2 MTU of the switch). For example, IOS XR devices expect a raw L2 MTU on l2transport interfaces, which would normally be 1514 (1500 bytes default payload + 14 bytes Ethernet frame header) on untagged interfaces and 1518 on tagged subinterfaces (1500 bytes default payload + 14 bytes header + 4 bytes 802.1Q tag), but on L3 (sub)interfaces they expect just the payload size, which would normally be 1500.
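To make the raw-versus-payload distinction concrete, here is the arithmetic for a 9000-byte IP MTU (just the standard header sizes, FCS not counted):

Code:
untagged frame: 9000 (IP packet) + 14 (Ethernet header)                  = 9014 bytes
tagged frame:   9000 (IP packet) + 14 (Ethernet header) + 4 (802.1Q tag) = 9018 bytes
# a switch L2 MTU of 9216 covers either case with room to spare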

My point is that where MTU is concerned, there is no "one size fits all" approach. You have to get to know the devices you are working with and understand how the end-to-end MTU across different devices affects allowed packet sizes and overall performance, or you have to experiment and hit that sweet spot by trial and error.
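If you do go the trial-and-error route, a don't-fragment ping sweep between two nodes finds the largest payload that actually makes it through (the target IP is a placeholder):

Bash:
# walk the ICMP payload size up until packets stop getting through;
# add 28 bytes (20 IP + 8 ICMP) to the largest working size to get the path MTU
for size in 1472 2000 4000 8000 8972; do
    ping -M do -c 1 -W 1 -s "$size" 10.10.10.12 >/dev/null 2>&1 \
        && echo "payload $size: ok" \
        || echo "payload $size: dropped"
done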
 
