Proxmox Cluster Network Communication

Jul 24, 2023
Hi Proxmox-Community,

[ Duplicate of https://forum.proxmox.com/threads/proxmox-cluster-network-communication.134758/ because it was in the wrong forum ]

I've finally decided to upgrade from a single node to a cluster (both run on Hetzner servers).
The goal is that the public IPs of the second node do not have any Proxmox ports open (e.g. 8006) and all communication uses the private network range (on the same physical link).
So I would like to have only two public ports open, one for the PVE GUI and one for the PBS GUI.

So in my ideal world I would not need any public communication between the two PVE hosts and the PBS server; everything would use the 172.x.x.x range between the nodes and for the backup traffic.

Of course, as it's a Hetzner setup, this is all configured on a single physical NIC.
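For context, a Hetzner vSwitch is attached as a tagged VLAN sub-interface on the single physical NIC, and Hetzner requires MTU 1400 for vSwitch traffic. A minimal sketch of the relevant /etc/network/interfaces stanza (the interface name, VLAN ID 4000 and the addressing are assumptions about the layout):

```text
# Sketch: Hetzner vSwitch as a VLAN sub-interface of the single physical NIC
# (interface name, VLAN ID and IP range are assumptions; MTU 1400 is required
# by Hetzner for vSwitch traffic)
auto enp0s31f6.4000
iface enp0s31f6.4000 inet static
    address 172.16.1.1/24
    mtu 1400
    vlan-raw-device enp0s31f6
```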

Is this, in general, a setup that is possible and recommended, or is it just a bad idea because there is no real benefit beyond not needing to block all the traffic on the Hetzner firewall? I would simply like to have the private traffic (replication, backups, SSH for migration, ...) on the private IP range.

So I've created a vSwitch on Hetzner to connect them with private IPs.
Corosync and the cluster are using those IPs:


Code:
Cluster information
-------------------
Name:             cluster
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Oct 11 11:50:07 2023
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1.20
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 172.16.1.1 (local)
0x00000002          1 172.16.1.2

Code:
nodelist {
  node {
    name: virtual
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 172.16.1.1
  }
  node {
    name: virtual02
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 172.16.1.2
  }
}

In the /etc/hosts file of each node, I have added a line with the private IP of the other node, so a "ping node" uses this private vSwitch network as well.

But for some reason, the Web UI calls from the main node (https://mainnode.com:8006/api2/json/nodes/node02/status) still seem to use the public IPs of node02 in the background.

This might also be because /etc/pve/.members still shows the public IPs for the nodes. For the node's own entry that's fine, but I would like to use the internal network for the cross-node communication; then I could block all public ports on the node.

Is this possible at all, or am I missing something?
And of course one PVE host will need a public IP so I can reach the GUI at all, since the OpenVPN access server runs in a container, which might crash or fail to start.

Cheers,
Andy
 
Once /etc/hosts is correctly modified: systemctl restart pve-cluster on each node. Wait for it to start correctly before restarting it on the other node. That should place the correct entries in /etc/pve/.members.
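After both restarts, /etc/pve/.members should list the private addresses. A quick sanity check against a sample of the file (the JSON layout and field names here are assumptions based on a typical two-node setup):

```shell
# Sample of what /etc/pve/.members should roughly look like after the restart
cat > /tmp/members.sample <<'EOF'
{
"nodename": "virtual",
"version": 4,
"cluster": { "name": "cluster", "version": 2, "nodes": 2, "quorate": 1 },
"nodelist": {
  "virtual": { "id": 1, "online": 1, "ip": "172.16.1.1"},
  "virtual02": { "id": 2, "online": 1, "ip": "172.16.1.2"}
  }
}
EOF
# Count entries in the private range - both nodes should match
grep -c '"ip": "172\.16\.' /tmp/members.sample
```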

A two-node cluster is a bad idea: you will have no quorum if either node goes down. At the very least, place a QDevice [1] on that PBS server (there seems to be a bug in v8.2 though [2], so it might not work at the moment, unfortunately). Also, if using a single link, do not use HA, to avoid nodes self-fencing if the network is down or congested.

[1] https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
[2] https://forum.proxmox.com/threads/proxmox-8-2-2-qdevice.145988/post-663827
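The QDevice setup from [1] boils down to a few commands; a sketch, assuming the PBS host is reachable at 172.16.1.3 over the vSwitch (that address and root SSH access to it are assumptions):

```shell
# On the external QDevice host (here: the PBS server):
apt install corosync-qnetd

# On every PVE cluster node:
apt install corosync-qdevice

# Then, on one PVE node, register the QDevice:
pvecm qdevice setup 172.16.1.3

# Verify: a "Qdevice" entry with a vote should now appear in the quorum output
pvecm status
```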
 
I thought restarting would help, of course, and restarted the nodes themselves as well, but maybe I should just restart the pve-cluster service itself instead of rebooting the nodes.

I'm not sure anyway whether this is really a desirable setup - it's just my gut feeling from years ago, when we set up our ESXi environments this way, though with dedicated NICs rather than shared ones.

Code:
172.16.1.2 virtual02.example.com virtual02

But of course the virtual01 node has the public IP configured for its own hostname (so it is still reachable); I probably have to change that as well, because as long as the service on port 8006 listens on 0.0.0.0 it will be reachable anyway.

Code:
136.243.x.x virtual.adlsrv.com virtual
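Rather than changing the /etc/hosts entry, pveproxy can be bound to a specific address instead of 0.0.0.0 via LISTEN_IP in /etc/default/pveproxy. A sketch for the second node (the private IP is from this setup; note this also restricts where the API on that node is reachable):

```shell
# /etc/default/pveproxy on virtual02 - bind the web UI/API to the private IP only
echo 'LISTEN_IP="172.16.1.2"' >> /etc/default/pveproxy
systemctl restart pveproxy
```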
 
