multiple nodes, VMs don't communicate with VMs on other nodes

pille99

Active Member
Sep 14, 2022
hello all,

my setup is the following:
node1: vm_a
node2: vm_b
vmbrxx: same subnet on both
if i migrate a VM, it is directly online again on the other node.
but, for instance, the DNS server runs on node1, and this server needs to be reachable from everywhere inside the cluster. it isn't.
from my understanding that is the main point of a cluster: a VM can be moved around and still gets a connection everywhere (it has to work somehow, because ceph communicates between all nodes). why doesn't it work for VMs?

i don't have anything special configured, just a pretty basic network config:
auto lo
iface lo inet loopback

auto enp41s0
iface enp41s0 inet manual
#1GB UPLINK

auto enp33s0f0
iface enp33s0f0 inet static
address 10.10.11.10/24
#1GB Ceph Public

auto enp33s0f1
iface enp33s0f1 inet static
address 10.10.12.10/24
#1GB CoroSync

auto enp1s0
iface enp1s0 inet static
address 10.10.10.10/24
mtu 9000
#10GB Ceph Cluster

auto vmbr0
iface vmbr0 inet static
address public_ip/27
gateway public_gwip
bridge-ports enp41s0
bridge-stp off
bridge-fd 0
#UPLINK Management

auto vmbr10
iface vmbr10 inet manual
bridge-ports none
bridge-stp off
bridge-fd 0
#Private Network 192.168.175.0

auto vmbr11
iface vmbr11 inet manual
bridge-ports none
bridge-stp off
bridge-fd 0
#Private Network 172.16.101.0

auto vmbr12
iface vmbr12 inet manual
bridge-ports none
bridge-stp off
bridge-fd 0
#Private Network 172.16.102.0

i have read a couple of pages but couldn't find the right answer. i think i have forgotten something, but i couldn't figure it out myself.
any idea how to solve the issue?
 
two thoughts on it:

1. it can't be the same subnet on all 4 nodes. how should the cluster know that the VM moved to another node?
2. the connection between the nodes is done over the external interface! because that's a public IP, it can't be resolved to any internal IP.

are those thoughts correct? and how do i solve it?
 
if your vmbr doesn't have any physical ports (and no forwarding/masquerading on the firewall level), any guests connected to it will only be able to talk to other guests on that node, and not to guests on other cluster nodes.
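
for reference, the forwarding/masquerading variant looks roughly like this in /etc/network/interfaces (just a sketch along the lines of the NAT example in the Proxmox admin guide; the 192.168.175.1/24 address on vmbr10 is an assumption). note that this only gives guests behind vmbr10 outbound access through the node's uplink - it still does not make them reachable from guests on other nodes:

auto vmbr10
iface vmbr10 inet static
address 192.168.175.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
post-up   iptables -t nat -A POSTROUTING -s 192.168.175.0/24 -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s 192.168.175.0/24 -o vmbr0 -j MASQUERADE
#Private Network 192.168.175.0, NAT out via vmbr0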
 
auto vmbr0
iface vmbr0 inet static
address public_ip/27
gateway public_gwip
bridge-ports enp41s0
bridge-stp off
bridge-fd 0
#UPLINK Management

vmbr0 is the bridge to the physical NIC,
but vmbr12, which carries the subnet that needs to talk to subnets on other nodes, has no connection to a physical NIC.

now i understand why many configs are set up with only one network and multiple VLANs. am i correct?
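
just as a sketch of what i mean (the vlan-aware lines are standard Proxmox syntax, and it of course only helps if the switch between the nodes actually carries those VLANs), a single VLAN-aware bridge on the uplink would look something like:

auto vmbr0
iface vmbr0 inet static
address public_ip/27
gateway public_gwip
bridge-ports enp41s0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
#UPLINK Management, each guest picks its VLAN tag on the vNIC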

can i connect it to the NIC used for corosync? or are only connections over an external NIC allowed?
 
any physical NIC that can talk to the other nodes would be okay - you can check out the new VXLAN/SDN feature if you want to have more control and flexibility. keep in mind that sharing the NIC with corosync might not be a good idea, since all that inter-node traffic would then go over it, which could negatively affect corosync.
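
to give a rough idea: a VXLAN zone plus a vnet (configured under Datacenter -> SDN) ends up as something like the following - the zone/vnet names, the VNI tag and the peer addresses are placeholders, and the peers should be the node addresses on whichever NIC you pick for the guest traffic (not the corosync one); mtu 1450 because VXLAN adds roughly 50 bytes of overhead on a 1500 MTU link.

/etc/pve/sdn/zones.cfg:
vxlan: vxzone1
	peers 10.10.11.10,10.10.11.11,10.10.11.12,10.10.11.13
	mtu 1450

/etc/pve/sdn/vnets.cfg:
vnet: vnet1
	zone vxzone1
	tag 100000

after applying the SDN config, vnet1 shows up as a bridge on every node, and guests attached to it can reach each other across nodes.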
 

would it work the way i have in mind?
an SDN across the 4 nodes with the public_ip translated to a private_ip; opnsense is connected to the private_IP, and routing handles the rest (incoming traffic on public_IP_1 is forwarded to proxmox1, and so on).

and a second SDN for VM-to-VM traffic across the nodes, done the same way, but i only have 1 NIC which is connected to all vNICs and public_ips.
(the advantage of HA is gone if i can't move the VMs around like i want/need)

right now i have everything on 1 node. the other 3 are doing nothing. that's not what i had in mind.
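
what i mean with the forwarding part, as a rough sketch added to vmbr0 on the node that owns the public IP (the 172.16.101.2 address for opnsense on vmbr11 is just an assumption, and the rule is tied to that node, so it does not follow the VM if it migrates):

post-up   iptables -t nat -A PREROUTING -i vmbr0 -d public_IP_1 -j DNAT --to-destination 172.16.101.2
post-down iptables -t nat -D PREROUTING -i vmbr0 -d public_IP_1 -j DNAT --to-destination 172.16.101.2

ip forwarding has to be enabled on the node, and opnsense needs its return route pointing back at the node so the replies find their way out.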
 

Attachments

  • proxmox.JPG

it worked like a charm.
the HA of one VM broke, though. i need to test properly whether it was just a one-time thing.
do you think i can do the same with the 4 public IPs i have? put them all in a VXLAN and be able to address all 4 of them, like: if traffic comes in on Public_IP_1 ... hmm, how does it continue? behind it is an opnsense which will manage all traffic and the firewall. how can i do it with the 4 public_ips?
 
