Connect VMs to private Ceph public network

lveltmaat

Hi guys.

We have a three-node cluster running Proxmox with Ceph as RBD storage.
The three nodes are connected to each other in a private full-mesh network that is only meant to be used by Ceph. Because we want to mount CephFS volumes in some VMs, we need to let those VMs access the Ceph public network. So far we haven't found a working network config.
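
For clarity, what we ultimately want inside a guest is a mount along these lines (a minimal sketch; the monitor IPs assume our public network shown below, the client name and secret file are placeholders, and the guest needs ceph-common installed):

Code:
# sketch: mount CephFS inside a VM over the Ceph public network
# 10.9.100.100-102 assume our public subnet; name/secretfile are placeholders
mount -t ceph 10.9.100.100,10.9.100.101,10.9.100.102:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret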

We used the following example to create a working config for the physical nodes, but hit a wall when trying to connect VMs to the Ceph monitors.
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Routed_Setup_.28Simple.29

/etc/network/interfaces for proxmox01:

Code:
auto lo
iface lo inet loopback

iface enp59s0f0 inet manual

iface enp59s0f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves enp59s0f0 enp59s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-lacp-rate fast
#MGT / Data Switch Ports

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VM Connectivity Bridge

auto vmbr0.200
iface vmbr0.200 inet static
        address 192.168.7.150
        netmask 255.255.248.0
        gateway 192.168.0.1
#Proxmox Management Interface

iface eno1 inet manual

iface eno2 inet manual

auto bond1
iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-lacp-rate fast
#SAN Interconnect proxmox02

auto bond1.100
iface bond1.100 inet static
        address 10.9.100.100
        netmask 255.255.252.0
        up ip route add 10.9.100.101/32 dev bond1.100
        down ip route del 10.9.100.101/32
#SAN Public proxmox02

auto bond1.104
iface bond1.104 inet static
        address 10.9.104.100
        netmask 255.255.252.0
        up ip route add 10.9.104.101/32 dev bond1.104
        down ip route del 10.9.104.101/32
#SAN Cluster proxmox02

iface eno3 inet manual

iface eno4 inet manual

auto bond2
iface bond2 inet manual
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad
        bond-lacp-rate fast
#SAN Interconnect proxmox03

auto bond2.100
iface bond2.100 inet static
        address 10.9.100.100
        netmask 255.255.252.0
        up ip route add 10.9.100.102/32 dev bond2.100
        down ip route del 10.9.100.102/32
#SAN Public proxmox03

auto bond2.104
iface bond2.104 inet static
        address 10.9.104.100
        netmask 255.255.252.0
        up ip route add 10.9.104.102/32 dev bond2.104
        down ip route del 10.9.104.102/32
#SAN Cluster proxmox03

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond1 bond2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 100
#Proxmox Internal SAN Bridge

auto vmbr1.100
iface vmbr1.100 inet static
        address 10.9.100.100
        netmask 255.255.252.0
#Proxmox Internal SAN Public Bridge

the routes configured for proxmox01:

Code:
Destination     Gateway       Genmask          Flags  Metric  Ref  Use  Iface
default         192.168.0.1   0.0.0.0          UG     0       0    0    vmbr0.200
192.168.0.0     0.0.0.0       255.255.248.0    U      0       0    0    vmbr0.200
10.9.100.101    0.0.0.0       255.255.255.255  UH     0       0    0    bond1.100
10.9.100.102    0.0.0.0       255.255.255.255  UH     0       0    0    bond2.100
10.9.104.101    0.0.0.0       255.255.255.255  UH     0       0    0    bond1.104
10.9.104.102    0.0.0.0       255.255.255.255  UH     0       0    0    bond2.104
link-local      0.0.0.0       255.255.0.0      U      0       0    0    idrac
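
As a sanity check, this is roughly how we test reachability from a guest attached to vmbr1 with VLAN tag 100 (the guest address is an example; 3300 and 6789 are the default Ceph monitor ports):

Code:
# inside the test VM: give the NIC an address in the Ceph public subnet
ip addr add 10.9.100.150/22 dev eth1
ip link set eth1 up
# try to reach a monitor on another node (msgr2 on 3300, msgr1 on 6789)
ping -c 3 10.9.100.101
nc -zv 10.9.100.101 3300
nc -zv 10.9.100.101 6789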

Thanks in advance.
 
Which of those networks is the Ceph public network? bond1.100 and bond2.100?
Do your guests that need access to the Ceph public network use vmbr1 or vmbr1.100?
 
Which of those networks is the Ceph public network? bond1.100 and bond2.100?
Do your guests that need access to the Ceph public network use vmbr1 or vmbr1.100?
The guests need to be able to use bond1.100 and bond2.100 to access the monitors on the other nodes. I tried to achieve this by creating the vmbr1 bridge. As far as I can tell, I can't attach a virtual network device to the vmbr1.100 interface directly, so I assign vmbr1 and let the guest use VLAN tag 100. This doesn't allow the VM to reach the monitors on the Ceph public network, however. I think the vmbr1.100 interface is redundant for that reason.
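
For reference, the guest NIC is attached roughly like this (VM ID 101 is just an example):

Code:
# attach a virtio NIC to vmbr1 and tag its traffic with VLAN 100
qm set 101 --net1 virtio,bridge=vmbr1,tag=100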

 
We tried a few different configs and found out that bridging bonds is not that simple. In the end we set up a dedicated bond for the Ceph public network, configured in broadcast mode. The ceph_cluster network now uses a single link per node, which simplifies the design.
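
For anyone who finds this later, the public-network config per node now looks roughly like this (a sketch; the address is from our proxmox01 example, and the slaves are assumed to be one direct link to each of the other two nodes):

Code:
auto bond1
iface bond1 inet static
        address 10.9.100.100/22
        bond-slaves eno1 eno3
        bond-miimon 100
        bond-mode broadcast
#Ceph public full mesh, one direct link to each of the other nodes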

Thanks for the time though.
 
Sorry for the late reply.

Bridging the links is not that easy with a simple routed setup.
Would it be possible to switch to an OVS RSTP setup [0]?

That way it is already handled, and you get additional redundancy.


[0] https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#RSTP_Loop_Setup
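
The config from [0] would look roughly like this per node (a sketch adapted from the wiki example; interface names, the path cost, and the address are placeholders):

Code:
auto enp2s0
iface enp2s0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1
        ovs_options other_config:rstp-enable=true other_config:rstp-path-cost=150

auto enp3s0
iface enp3s0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1
        ovs_options other_config:rstp-enable=true other_config:rstp-path-cost=150

auto vmbr1
iface vmbr1 inet static
        address 10.9.100.100/22
        ovs_type OVSBridge
        ovs_ports enp2s0 enp3s0
        up ovs-vsctl set Bridge ${IFACE} rstp_enable=true
#Ceph public bridge, loop handled by RSTP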
Excuse me, dear mira, do you have any ideas about my question?
Here is the link to the thread:

https://forum.proxmox.com/threads/how-to-set-up-a-redundant-ceph-network??.135694/#post-600538
 
