Utilising a direct connection for storage replication

Mar 26, 2019
Hi all,

I have 2 identical nodes on the latest 5.3 version.
I'm using storage replication based on a shared zfs pool.
IPv4-only network, 2 VLANs, but only one is involved in Proxmox routing.

Each node has 4 Ethernet ports, so I've decided to create 2 bonds:
1. Going to the nearest switch that both nodes share, where some other LAN devices are also connected (bond0).
2. Interconnecting the nodes with a pair of crossover cables (bond1).

My /etc/network/interfaces is as below:

auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual
iface ens1f0 inet manual
iface ens1f1 inet manual

auto bond0
iface bond0 inet manual
slaves eno1 eno2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer3+4

auto bond1
iface bond1 inet static
slaves ens1f0 ens1f1
address 192.168.100.1
netmask 255.255.255.252
bond_miimon 100
bond_mode balance-rr
bond_xmit_hash_policy layer3+4

auto vmbr0
iface vmbr0 inet static
address 192.168.8.107
netmask 255.255.252.0
gateway 192.168.8.1
bridge_ports bond0
bridge_stp off
bridge_fd 0

Both bonds are working fine, i.e. pass traffic.
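For anyone wanting to verify the same, the kernel exposes the negotiated bond state, and a quick throughput test over the crossover link is easy if iperf happens to be installed on both nodes (interface names and addresses as in my config above):

cat /proc/net/bonding/bond0   # LACP state of the switch-facing bond
cat /proc/net/bonding/bond1   # round-robin bond over the crossover cables
iperf -s                      # on node2
iperf -c 192.168.100.2        # on node1, pushes traffic across bond1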

Question: How do I force all storage replication traffic through bond1?

In my scenario there is no need to involve a switch or any other device in this traffic.
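As a side note, a quick way to see which interface the kernel would pick for a given peer address is ip route get (192.168.100.2 being node2 on the directly connected /30 here):

ip route get 192.168.100.2    # should report "dev bond1"
ip route get 192.168.8.1      # should report "dev vmbr0"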

I've tried adding static entries to /etc/hosts on both nodes:

node1:

127.0.0.1 localhost.localdomain localhost
192.168.8.107 node1.matrixscience.co.uk node1 pvelocalhost
192.168.100.2 node2.matrixscience.co.uk node2

node2:

127.0.0.1 localhost.localdomain localhost
192.168.8.107 node2.matrixscience.co.uk node2 pvelocalhost
192.168.100.1 node1.matrixscience.co.uk node1

but still no traffic passes through bond1 / 192.168.100.0/30.
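Name resolution itself can be double-checked with getent, e.g.:

getent hosts node2            # on node1, should print 192.168.100.2
getent hosts node1            # on node2, should print 192.168.100.1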

Is this possible and advisable to do?

Regards,
Adam
 
I've edited:

/etc/corosync/corosync.conf
/etc/pve/corosync.conf

and replaced the bond0 (192.168.8.x) addresses with the bond1 (192.168.100.x) ones, followed by a reboot of both nodes.
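For reference, the node addresses live in the nodelist section of corosync.conf; the edit looks roughly like this (node IDs are just examples, everything else left alone, and config_version in the totem section bumped by one so the change propagates):

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.100.1
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.100.2
  }
}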

This doesn't seem to be enough.

/etc/pve/.members is still pointing to the old (bond0) addresses, and it looks like it's not supposed to be manually edited.

Quorum traffic passes through both bonds, but I really want to force it through bond1.

Storage replication still attempts to use bond0 and fails if the link is down.

Do I have no choice but to destroy the cluster and recreate it based on bond1?
 

Hi Adam,

I'm facing the same challenge. Did you manage to get this to work?

Alain
 
Hi Alain,

Yes, I ended up recreating the cluster with /etc/hosts on both nodes as below:

127.0.0.1 localhost.localdomain localhost
192.168.100.1 node1.matrixscience.co.uk node1
192.168.100.2 node2.matrixscience.co.uk node2

My /etc/network/interfaces hasn't changed and looks exactly as in my original post above.

No need to edit:

/etc/corosync/corosync.conf
/etc/pve/corosync.conf
/etc/pve/.members

as these will be generated automatically on cluster creation.
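For anyone repeating this, the cluster recreation side of it is just the standard commands, run after putting the bond1 addresses into /etc/hosts on both nodes (the cluster name below is only a placeholder):

pvecm create mycluster        # on node1
pvecm add node1               # on node2; node1 now resolves to 192.168.100.1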

Everything is working as desired now:
- all CT/VM traffic is served through bond0
- quorum traffic and all ZFS replication happen through bond1

Hope this helps, good luck!

Cheers,
Adam
 
Done that, and "pvecm status" now shows the cluster is set up on the storage LAN IP addresses,
but when I monitor the interfaces (vmnics, as I'm testing this on a VMware server) I don't see load from a
ping test (5K packets) between pve1 and pve2 on its LAN IP address.
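One way to see it from inside the nodes rather than from the vmnic counters is to watch the bond during the test; note that pinging the LAN IP will go via bond0/vmbr0, so the 192.168.100.x address is the one that exercises the direct link:

tcpdump -ni bond1 icmp        # on the receiving node, shows the ICMP packets if bond1 carries them
ip -s link show bond1         # or compare RX/TX byte counters before and after the test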
 
I've never tried it on VMware since Proxmox is meant to serve similar purposes and act as a host itself.
But I understand you need to experiment with the Proxmox stack somewhere and you might not want to invest time and money in setting up hardware.
You should be able to create a secondary, isolated connection between 2 VMs using a port group and a virtual switch.
But then setting up and troubleshooting VMware comes into play.

If you have 2 old PCs and don't want to mess up your VMware host too much, you can buy two PCIe Ethernet cards to simulate this scenario.
So that each PC has 2 physical NICs.
You can probably get these from eBay or similar for no more than a few bucks.
You can also use laptops and USB Ethernet dongles which are equally cheap.
 
