How to allow communication between VMs on different cluster nodes

CyrilHazotte

Jun 14, 2022
Dear all, Proxmox experts,

Despite several attempts (playing with bridge configs, VLANs, firewall rules, and so on), our experiments with a 5-node Proxmox cluster have not produced a setup that allows VMs (Linux, Windows, pfSense) on different cluster nodes to communicate.

Section 5.7.2 of the Proxmox VE admin guide says:
Separate Cluster Network
When creating a cluster without any parameters, the corosync cluster network is generally shared with the
web interface and the VMs’ network. Depending on your setup, even storage traffic may get sent over the
same network. It’s recommended to change that, as corosync is a time-critical, real-time application.

My questions are:
- Is that separate cluster network required to achieve any communication between VMs on different nodes? The minimum would be an ICMP ping.
- How do we achieve a cluster-wide network setup when configuration is done per node? Most of the magic should happen in the /etc/network/interfaces file, but each node has its own copy.

Thank you in advance!
Cyril
My questions are:
- Is that separate cluster network required to achieve any communication between VMs on different nodes? The minimum would be an ICMP ping.
No, that's meant only for corosync traffic, but the principle is similar.

- How do we achieve a cluster-wide network setup when configuration is done per node? Most of the magic should happen in the /etc/network/interfaces file, but each node has its own copy.
If you have an available physical port, you could cross-connect your nodes, create a bridge on that new interface, and assign it to the VMs. That should enable them to communicate between nodes.
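A minimal sketch of what that could look like, assuming the spare port is named enp2s0 (the interface name is an assumption; check yours with `ip link`). On each node, add to /etc/network/interfaces:

```
auto enp2s0
iface enp2s0 inet manual

# vmbr1: layer-2-only bridge for cross-node VM traffic.
# No IP is needed on the host side; the guests carry the addresses.
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
```

Then apply the config (e.g. with `ifreload -a` or a reboot) and attach the VMs' virtual NICs to vmbr1 instead of vmbr0, giving the guests addresses in one common subnet.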
If you have an available physical port, you could cross-connect your nodes, create a bridge on that new interface, and assign it to the VMs. That should enable them to communicate between nodes.
Thank you for your response.

Each node has a built-in 'Network Device' named enp1s0.
In addition, the default 'Linux Bridge' vmbr0 on that port has IPv4/CIDR 172.16.1.23x/24, 1<x<5, with gateway 172.16.1.254.
We tried changing the default bridge for each VM so the VMs used IPs in the range 192.168.10.x/24, 1<x<5. However, that didn't make the VMs reachable: all nodes can ping each other, but the VMs with manually set IPs cannot.
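For reference, the guest-side setup we used is roughly the following (Debian-style static config inside a Linux VM; the interface name ens18 and the exact address are illustrative):

```
# /etc/network/interfaces inside the guest
auto ens18
iface ens18 inet static
    address 192.168.10.2/24
```

Since every VM's virtual NIC is attached to vmbr0, which is bridged onto enp1s0, our understanding is that guests on different nodes should share the same layer-2 segment and reach each other regardless of the 192.168.10.x addressing, but in practice the pings fail.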

Any guidance on this?

Thank you.
Cyril