Cluster network with storage, migration network

Pravednik

Active Member
Sep 18, 2018
Hi All,

I'm going to migrate to Proxmox and I have a question about the cluster network.

I have servers with 4 NICs:
2 x 1GbE for uplinks, connected to separate switches in active-standby mode; these go to the core router, and all VMs get public IPs.
2 x 10GbE, connected to separate switches in active-standby mode; these go to the NAS storage.
All nodes have shared (NFS) storage for the VMs. Live migration, storage migration, and backup traffic will also go over the 10GbE adapters.
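For the migration traffic, the plan is to pin it to the 10G network via the migration property in /etc/pve/datacenter.cfg; a minimal sketch, assuming the storage subnet is 10.32.32.0/22 as in the configuration further down:
Code:
# /etc/pve/datacenter.cfg -- sketch; the /22 CIDR is an assumption based on the storage netmask
migration: secure,network=10.32.32.0/22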

As I understood from the admin guide, it is strongly recommended to use a separate network for cluster traffic:

The network should not be used heavily by other members; ideally corosync runs on its own network. Never share it with a network where storage communicates too.


But in my case the 1GbE network is at risk, because some clients could start a DDoS or send a lot of traffic until the IPS blocks them.
I don't want to lose redundancy by unplugging one NIC just for cluster traffic.

What is the best choice for cluster traffic? Leave it on the 10GbE network, which is more secure and faster, or put it on the 1GbE networks? Average traffic on the 10GbE NICs is 2 Gbit/s.
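For reference, if corosync ends up bound to the 10G network, the relevant parts of /etc/pve/corosync.conf would look roughly like this (only a sketch; the cluster name, node name, and the 10.32.32.0/22 network are assumptions based on the storage configuration further down):
Code:
totem {
  # cluster_name below is an assumed name for the sketch
  cluster_name: pve-cluster
  config_version: 2
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    ringnumber: 0
    # bind ring0 to the 10G storage subnet (assumed 10.32.32.0/22)
    bindnetaddr: 10.32.32.0
  }
}

nodelist {
  node {
    # assumed node name; ring0_addr is this node's 10G address
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.32.34.11
  }
}

quorum {
  provider: corosync_votequorum
}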

Thanks in advance for answers.
 
After a few days of reading I decided to leave the corosync network on the 10G net.
But I have a problem with VLANs.

As I mentioned above, I have two 10G NICs without VLANs and two 1G NICs with VLANs configured on the switch.
I created two bonds: a regular Linux bond1 for the 10G network, with vmbr0 on top for the internal network, and that works.
Then I created an OVS bridge vmbr1, an OVS bond0 for the 1G cards, and two OVS IntPorts with a VLAN tag and an IP address. I couldn't ping those addresses.

/etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

#1Gbe cards
iface enp0s25 inet manual
iface enp1s0 inet manual

#10Gbe cards
iface enp3s0f0 inet manual
iface enp3s0f1 inet manual

auto bond1
iface bond1 inet manual
        slaves enp3s0f0 enp3s0f1
        bond_miimon 100
        bond_mode active-backup
#Storage traffic
auto vmbr0
iface vmbr0 inet static
        address  10.32.34.11
        netmask  255.255.252.0
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 60

allow-vmbr1 bond0
iface bond0 inet manual
        ovs_bonds enp0s25 enp1s0
        ovs_type OVSBond
        ovs_bridge vmbr1
        ovs_options bond_mode=active-backup

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 mgmt wan

#PVE Mgmt
allow-vmbr1 mgmt
iface mgmt inet static
        address  10.100.102.11
        netmask  255.255.252.0
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=33

#External network
allow-vmbr1 wan
iface wan inet static
        address  external_ip
        netmask  255.255.252.0
        gateway  external_gateway
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=22

I also tried a regular Linux bond/bridge with VLANs, roughly like the sketch below, and had no luck either.
I can't understand where I'm wrong.
Could you review the configuration and help? I'm new to PVE/KVM, so sorry for the newbie questions.
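That variant would look roughly like the following sketch (vmbr2 is just an illustrative name for the external bridge; the vlan package and switch ports trunking tags 22 and 33 are assumed):
Code:
auto bond0
iface bond0 inet manual
        slaves enp0s25 enp1s0
        bond_miimon 100
        bond_mode active-backup

#PVE Mgmt on VLAN 33 (bond0.33 is created on demand when the vlan package is installed)
auto vmbr1
iface vmbr1 inet static
        address  10.100.102.11
        netmask  255.255.252.0
        bridge_ports bond0.33
        bridge_stp off
        bridge_fd 0

#External network on VLAN 22 (vmbr2 is an illustrative name, not from the original config)
auto vmbr2
iface vmbr2 inet static
        address  external_ip
        netmask  255.255.252.0
        gateway  external_gateway
        bridge_ports bond0.22
        bridge_stp off
        bridge_fd 0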
 
Hi

Not sure, but I think you should follow this example for OVS and VLANs:
https://forum.proxmox.com/threads/vlan-tag.38051/
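
If I remember the documented OVS examples right, the bridge is declared with allow-ovs instead of auto, and the bond and IntPorts hang off it with allow-vmbr1. Roughly like this (an untested sketch using your names; it also assumes openvswitch-switch is installed and the switch ports for the 1G cards are trunks carrying VLANs 22 and 33):
Code:
# sketch only -- adapted from the Proxmox OVS examples, not tested here
allow-ovs vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 mgmt wan

allow-vmbr1 bond0
iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr1
        ovs_bonds enp0s25 enp1s0
        ovs_options bond_mode=active-backup

allow-vmbr1 mgmt
iface mgmt inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=33
        address  10.100.102.11
        netmask  255.255.252.0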
 
