Hi,
We've been having some serious connection issues with our two-node Proxmox cluster while the nodes were connected over a 10 Gbit SFP link, so we decided to set up a separate, direct 1 Gbit RJ45 connection for node-to-node communication and leave the 10 Gbit link exclusively for the shared-storage NFS server.
My question is: was it enough to simply change the entries in /etc/hosts and /etc/network/interfaces on each node, or should we have done something more?
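For reference, the change on each node looked roughly like this. The peer hostname (prox-node0001), its .221 address, and the interface name eth1 are assumptions for illustration; only prox-node0002 and 10.10.11.222 are taken from our actual setup:

```
# /etc/hosts (both nodes)
# 10.10.11.221 for the peer node is an assumed example
10.10.11.221  prox-node0001
10.10.11.222  prox-node0002

# /etc/network/interfaces on prox-node0002
# eth1 as the dedicated 1 Gbit RJ45 interface is an assumed example
auto eth1
iface eth1 inet static
    address 10.10.11.222
    netmask 255.255.255.0
```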
Here's our current pvecm output for one of the two nodes:
pvecm status
Version: 6.2.0
Config Version: 26
Cluster Name: proxmox-cluster
Cluster Id: 39604
Cluster Member: Yes
Cluster Generation: 288
Membership state: Cluster-Member
Nodes: 2
Expected votes: 3
Quorum device votes: 1
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 8
Flags:
Ports Bound: 0 177 178
Node name: prox-node0002
Node ID: 2
Multicast addresses: 239.192.154.79
Node addresses: 10.10.11.222
Previously the node's address was 10.10.10.222, so as you can see we've changed the subnet. There's another subnet, 10.10.10.0/24, where our NFS server "lives" (it also hosts our quorum disk). Was that enough to separate node traffic from storage traffic, or is there something else we should tweak to make sure inter-node traffic stays separated from the storage?