10GbE ring configuration

kkjensen

New Member
Jan 30, 2020
Hi there, everyone.

I'm still pretty new to Proxmox and Debian, but I come from a background where I've used a lot of Cisco switches and dabbled in Linux.

I have a 4-node hyper-converged cluster running 3 separate networks for Corosync and Ceph, with 'outside' access happening on bridged ports. It seems to work pretty slick.

I recently acquired some two-port 10GbE NICs with the intention of setting them up in a ring for the Ceph network.

Is a network ring possible? It wouldn't be as fast as a mesh setup (since I don't have 3 available ports on each host), nor as robust as bonding both NICs to two ports on a switch, but once I'm in production I'll only have one SFP+ trunk port available to tie into the rest of the access network.

My questions are twofold:

1. Can NIC ports be set up as a virtual internal switch so data can pass through one node to get to another? I figured a 10GbE ring would still be significantly faster than a 1GbE Cat6 connection to a Cisco 2960G I had lying around.
2. Can a "VLAN-aware" port be set to accept/pass through any VLAN, or does each VLAN have to be explicitly defined as a bridge?
 
>>1. Can NIC ports be set up as a virtual internal switch so data can pass through one node to get to another? I figured a 10GbE ring would still be significantly faster than a 1GbE Cat6 connection to a Cisco 2960G I had lying around.

You can try to create a ring, with a vmbr on each Proxmox node that has both NICs plugged in, but you need to enable spanning tree to avoid a loop (bridge_stp on). (I really can't recommend this in production, but you could try. Expect some packet loss whenever a node reboots and spanning tree reconverges.)
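
A minimal sketch of what /etc/network/interfaces could look like on one node in that setup, assuming the two 10GbE ports come up as ens1f0/ens1f1 (placeholder names, check yours with "ip link") and 10.10.10.0/24 is used as an example Ceph subnet:

Code:
    # one 10GbE port toward each neighbour in the ring
    auto ens1f0
    iface ens1f0 inet manual

    auto ens1f1
    iface ens1f1 inet manual

    # bridge joining both ports so traffic can pass through this node;
    # spanning tree must stay on or the ring becomes a layer-2 loop
    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.1/24
        bridge_ports ens1f0 ens1f1
        bridge_stp on
        bridge_fd 15

Each node would get its own address in that subnet (10.10.10.2, .3, .4 on the others), and STP will block one segment of the ring, reconverging with a pause whenever a node drops out.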

>>2. Can a "VLAN-aware" port be set to accept/pass through any VLAN, or does each VLAN have to be explicitly defined as a bridge?
Yes, by default it allows all VLANs (this can be tuned with "bridge-vids 1-10,40,60,100-200", for example; the default is 2-4094, with default VLAN 1).
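
For illustration, a VLAN-aware bridge on a single uplink might look like this (the port name eno1 and the addresses are placeholders, not from the thread):

Code:
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        # restrict the trunked VLANs; omit bridge-vids to allow the 2-4094 default
        bridge-vids 1-10,40,60,100-200

Guests then just get a VLAN tag set on their virtual NIC; there is no need for one bridge per VLAN.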
 
Thanks for your reply, and sorry for the late response. After digging more (and screwing up a few things while trying to move the cluster's IP subnets to something compatible with the office, should I ever need to move the machines there), I've restarted the project with fresh Proxmox installs on all 4 machines. One of the 10GbE adapters was dead on arrival, so I'm in a holding pattern waiting for a replacement.

In the meantime I have come to the conclusion (based on some other messages and posts I've seen) that I should set up the cluster with Open vSwitch running all the bonds and rings.
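
For anyone following along, a rough sketch of that OVS direction, loosely based on the RSTP loop variant described in the Proxmox "Full Mesh Network for Ceph" wiki page; the port names and address are placeholders, and the openvswitch-switch package has to be installed first:

Code:
    # both ring ports in one OVS bridge, with RSTP enabled
    # so the loop is broken automatically
    auto ens1f0
    iface ens1f0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1

    auto ens1f1
    iface ens1f1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1

    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.1/24
        ovs_type OVSBridge
        ovs_ports ens1f0 ens1f1
        up ovs-vsctl set Bridge ${IFACE} rstp_enable=true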
 
