Cluster uses public IP instead of private

IWIOS

Member
Dec 3, 2021
Hello,

I set up a new cluster with new hardware.

I have 2 nodes; each node has 2 interfaces, one with a public IP and one with a private IP.

When I set up the cluster on node 1, I used the vmbr that carries the private network.
When I joined the cluster, I used the CLI to specify link0 with the internal IP.
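For reference, the join was done roughly like this (placeholder IPs, not my real ones):

    pvecm add 10.0.0.1 --link0 10.0.0.2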

So, when I open a shell from the web interface on node 2 to node 1, I can see with the ss command that the SSH connection is established on the public IP. When I launch a shell from node 1 to node 2, with the ss command I see the connection is established on the internal network.
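Concretely, the check on each node was just something like:

    ss -tnp | grep ssh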

If I open the web shell on node 2 and run ssh root@node1 myself, it uses the internal IP...
That's a little bit confusing

Why does this happen? And how can I change it, to force SSH from node 2 to node 1 to go over the internal network?

Thanks for helping me
 
the cluster network is only relevant for corosync. most other things just rely on DNS (or /etc/hosts), sometimes with an option to override it (e.g., migration has a separate setting to force it to use a different network/link).
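For example, a dedicated migration network can be set in /etc/pve/datacenter.cfg (the CIDR below is just a placeholder for your private network):

    # /etc/pve/datacenter.cfg
    migration: secure,network=10.0.0.0/24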
 
I set up the hosts with the local IP.
So I understand it's normal for Proxmox to use the public IP to SSH between 2 nodes in the same cluster?
 
PVE will use whatever the other node's hostname resolves to for intra-node connections (via SSH or the API), unless there is an override for that specific task (like replication or migration).
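So if the goal is to have intra-cluster SSH go over the internal network, one option (a sketch only, the hostnames and 10.0.0.x addresses are placeholders for your own) is to make each node's name resolve to its private IP in /etc/hosts on both nodes:

    # /etc/hosts (same entries on both nodes)
    10.0.0.1 node1.example.local node1
    10.0.0.2 node2.example.local node2

You can check what a name currently resolves to with getent hosts node1.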