Hi,
I've set up a two-node cluster using a dedicated network, as described in the Proxmox Wiki: https://pve.proxmox.com/wiki/Cluster_Manager
Dedicated network range: 10.17.14.0/24
Node 1 IP: 10.17.14.1
Node 2 IP: 10.17.14.2
When creating the cluster, on the first node I did:
Code:
pvecm create mycluster --ring0_addr 10.17.14.1 --bindnet0_addr 10.17.14.0
And for joining the second node:
Code:
pvecm add 10.17.14.1 -ring0_addr 10.17.14.2
And this is the content of /etc/pve/corosync.conf:
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.17.14.1
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.17.14.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: mycluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.17.14.0
    ringnumber: 0
  }
}
And this is the content of /etc/pve/.members:
Code:
{
  "nodename": "node1",
  "version": 4,
  "cluster": { "name": "mycluster", "version": 4, "nodes": 2, "quorate": 1 },
  "nodelist": {
    "node1": { "id": 1, "online": 1, "ip": "37.xx.xx.xx" },
    "node2": { "id": 2, "online": 1, "ip": "5.xx.xx.xx" }
  }
}
As you can see, in .members the public IPs are listed instead of the dedicated cluster network ones.
Is that normal?
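Since .members is plain JSON, it's easy to check programmatically that none of the listed addresses fall in the dedicated range. A minimal sketch (the JSON is copied from my file above, with the public IPs redacted as shown):

```python
import json

# /etc/pve/.members content as shown above (public IPs redacted)
members_json = """
{
  "nodename": "node1",
  "version": 4,
  "cluster": { "name": "mycluster", "version": 4, "nodes": 2, "quorate": 1 },
  "nodelist": {
    "node1": { "id": 1, "online": 1, "ip": "37.xx.xx.xx" },
    "node2": { "id": 2, "online": 1, "ip": "5.xx.xx.xx" }
  }
}
"""

members = json.loads(members_json)
for name, node in members["nodelist"].items():
    # Check whether the listed address is on the dedicated 10.17.14.0/24 network
    on_cluster_net = node["ip"].startswith("10.17.14.")
    print(f"{name}: {node['ip']} (on cluster network: {on_cluster_net})")
```

Both nodes report their public address here, even though corosync.conf clearly uses the 10.17.14.x ring addresses.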
I've had to add a firewall rule on both nodes allowing all traffic between them on the public IPs.
If I only set a rule allowing all traffic between the nodes on the 10.17.14.0/24 range, I cannot view or configure VMs running on the other node. For example, I cannot manage a VM hosted on node 2 from node 1.
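For reference, the rule I expected to be sufficient looks roughly like this in /etc/pve/firewall/cluster.fw (a sketch of what I tried, not a working setup, since with only this rule node-to-node management traffic still fails):

```
[RULES]

# Allow everything on the dedicated cluster network only
IN ACCEPT -source 10.17.14.0/24
OUT ACCEPT -dest 10.17.14.0/24
```

Only after additionally allowing traffic between the two public IPs did cross-node VM management start working.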
Thank you all
Regards