Unable to connect to other nodes' VM console

tlex

I've recently been running into a strange issue (I don't know exactly when it started) when accessing the console of VMs on my other nodes while connected to the GUI of node1. My cluster has 3 nodes in total, and the following works:

- Accessing the VM console of node1 from the node1 GUI (local)
- Accessing the VM console of node3 from the node1 GUI

What is not working:
- Accessing the VM console of node1 from the node2 GUI
- Accessing the VM console of node1 from the node3 GUI
- Accessing the VM console of node2 from the node1 GUI
- Accessing the VM console of node2 from the node3 GUI


So yes, I can access the VMs of node3 from node1, but not the other way around.

This is the kind of log I get (it seems to connect and then disconnect about half a second later):
tail -f /var/log/auth.log

May 2 14:04:27 pve2 sshd[5910]: Accepted publickey for root from 10.32.50.8 port 38288 ssh2: RSA SHA256:ilA1TWTUx+7uZvgzRyAIYUAlOPp2y4W7DJRN5LecPvg
May 2 14:04:27 pve2 sshd[5910]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
May 2 14:04:27 pve2 systemd-logind[566]: New session 11 of user root.
May 2 14:04:29 pve2 sshd[5910]: Received disconnect from 10.32.50.8 port 38288:11: disconnected by user

May 2 14:04:29 pve2 sshd[5910]: Disconnected from user root 10.32.50.8 port 38288
May 2 14:04:29 pve2 sshd[5910]: pam_unix(sshd:session): session closed for user root
May 2 14:04:29 pve2 systemd-logind[566]: Session 11 logged out. Waiting for processes to exit.
May 2 14:04:29 pve2 systemd-logind[566]: Removed session 11.

tail -f /var/log/syslog (I know the timestamps are different, but I just retried and added this log to the thread):
May 2 14:18:14 pve2 pmxcfs[9345]: [status] notice: received log
May 2 14:18:14 pve2 pmxcfs[9345]: [status] notice: received log
May 2 14:18:14 pve2 systemd[1]: Started Session 16 of user root.
May 2 14:18:16 pve2 systemd[1]: session-16.scope: Succeeded.
May 2 14:18:16 pve2 systemd[1]: session-16.scope: Consumed 1.137s CPU time.
May 2 14:18:16 pve2 pmxcfs[9345]: [status] notice: received log

I can SSH from each of the 3 nodes to each of the other nodes.

Any idea where I should investigate?
Time is synced between the 3 nodes against the same NTP server.
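One quick check I'd add here (just a sketch; 10.32.50.7 is simply one of the node IPs from the membership list below): a non-interactive SSH command between nodes should print only the output of the command itself and nothing more:

ssh root@10.32.50.7 echo ok

If anything besides "ok" comes back, something in the remote root shell's startup files is writing to the session, and that extra output travels over the same channel the console connection uses.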

pvecm status
Cluster information
-------------------
Name: Maison
Config Version: 5
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Mon May 2 14:21:07 2022
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000002
Ring ID: 1.6d3
Quorate: Yes

Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.32.50.6
0x00000002 1 10.32.50.7 (local)
0x00000003 1 10.32.50.8

OK, I found it... I was running NeoFetch on these nodes :(
Maybe it could be documented (in a sticky) that customizing .bashrc like this can produce this behavior?
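For anyone hitting the same thing, this is a minimal sketch of the guard that fixed it for me (the stock Debian .bashrc ships an equivalent check near the top): make sure nothing is printed for non-interactive shells.

# ~/.bashrc
# Bail out early for non-interactive shells so that SSH-driven commands
# (like the console proxy between nodes) get a clean, silent session.
case $- in
    *i*) ;;        # interactive shell: keep going
    *) return ;;   # non-interactive: stop here, before neofetch runs
esac

neofetch

With that in place the fancy output still shows up on interactive logins, but plain "ssh node command" invocations stay quiet.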
 
