Hi Proxmox-Community,
[ Duplicate of https://forum.proxmox.com/threads/proxmox-cluster-network-communication.134758/ because it was in the wrong forum ]
I've finally decided to upgrade from a single node to a cluster (both run on Hetzner servers).
The goal would be that the public IPs of the second node do not have any Proxmox ports open (e.g. 8006) and all communication uses the private network range (on the same physical link).
So I would like to have only two public ports open, one for the PVE GUI and one for the PBS GUI.
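On the second node that would boil down to something like this (just a sketch with plain iptables to illustrate the intent; in practice this would live in the Hetzner or Proxmox firewall, and the subnet is the one from my vSwitch):
Code:
# sketch for node 2: PVE API reachable from the vSwitch subnet only, dropped on the public side
iptables -A INPUT -s 172.16.1.0/24 -p tcp --dport 8006 -j ACCEPT
iptables -A INPUT -p tcp --dport 8006 -j DROP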
So in my ideal world there would be no public communication between the two PVE hosts and the PBS server at all; everything between the nodes and for the backup traffic would use the 172.x.x.x range.
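The PBS storage would then be added with its private address in /etc/pve/storage.cfg, roughly like this (storage ID, datastore name, the PBS address 172.16.1.3 and the fingerprint are just placeholders):
Code:
pbs: pbs-private
        datastore store1
        server 172.16.1.3
        content backup
        username root@pam
        fingerprint <fingerprint of the PBS certificate>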
Of course, as it's a Hetzner setup, this is all configured on a single physical NIC.
Is such a setup generally possible and recommended, or is it just a bad idea because there is no real benefit beyond not having to block all that traffic on the Hetzner firewall? I would simply like to keep the private traffic (replication, backup, SSH for migration, ...) on the private IP range.
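For the migration/SSH part I was planning to pin the traffic to the private subnet via /etc/pve/datacenter.cfg, roughly like this (assuming 172.16.1.0/24 is the vSwitch subnet):
Code:
migration: secure,network=172.16.1.0/24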
So I've created a vSwitch on Hetzner to connect them with private IPs.
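On each node the vSwitch is attached as a VLAN on the single physical NIC in /etc/network/interfaces, roughly like this for node 1 (interface name and VLAN ID are just examples; Hetzner requires MTU 1400 on the vSwitch):
Code:
auto enp0s31f6.4000
iface enp0s31f6.4000 inet static
        address 172.16.1.1/24
        mtu 1400
        vlan-raw-device enp0s31f6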
Corosync and the cluster are using those IPs:
Code:
Cluster information
-------------------
Name: cluster
Config Version: 2
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Wed Oct 11 11:50:07 2023
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.20
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 172.16.1.1 (local)
0x00000002 1 172.16.1.2
Code:
nodelist {
  node {
    name: virtual
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 172.16.1.1
  }
  node {
    name: virtual02
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 172.16.1.2
  }
}
In the /etc/hosts file of each node, I have added a line with the private IP of the other node, so a "ping node" uses this private vSwitch network as well.
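For illustration, the added lines look like this (using the private IPs and node names from corosync.conf):
Code:
# on virtual (node 1):
172.16.1.2   virtual02
# on virtual02 (node 2):
172.16.1.1   virtual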
But for some reason, the Web UI calls to the main node (https://mainnode.com:8006/api2/json/nodes/node02/status) still seem to use the public IPs of node02 in the background.
That would fit, as /etc/pve/.members still shows the public IPs for the nodes. For the node the file belongs to that's fine, but I would like the cross-node communication to use the internal network, so that I could block all public ports on that node.
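For reference, /etc/pve/.members currently looks roughly like this (public IPs replaced with placeholders):
Code:
{
"nodename": "virtual",
"version": 5,
"cluster": { "name": "cluster", "version": 2, "nodes": 2, "quorate": 1 },
"nodelist": {
  "virtual": { "id": 1, "online": 1, "ip": "203.0.113.10"},
  "virtual02": { "id": 2, "online": 1, "ip": "203.0.113.20"}
  }
}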
Is this possible at all, or am I missing something?
And of course one PVE host will still need its GUI reachable via a public IP, since the OpenVPN Access Server runs in a container and might crash or not start correctly.
Cheers,
Andy