Hi guys.
We have a three-node cluster running Proxmox with Ceph as the RBD store.
The three nodes are connected to each other over a private network that is meant to be used only by Ceph. Because we want to mount CephFS volumes in some VMs, we need to let those VMs access the Ceph public network, but so far we haven't found a working network config.
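For context, the end goal is something like this from inside a guest, once it can reach the monitors. This is only a sketch: 'guestuser' and the secret file are placeholders, and we're assuming the monitors listen on the nodes' public addresses (10.9.100.100-102, see the config below).
Code:
# Hypothetical CephFS mount from inside a VM; 'guestuser' and the
# secret file stand in for our actual client credentials.
mount -t ceph 10.9.100.100:6789,10.9.100.101:6789,10.9.100.102:6789:/ /mnt/cephfs \
      -o name=guestuser,secretfile=/etc/ceph/guestuser.secret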
We used the following example to build a working config for the physical nodes, but hit a wall when trying to connect VMs to the Ceph monitors:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Routed_Setup_.28Simple.29
/etc/network/interfaces for proxmox01:
Code:
auto lo
iface lo inet loopback

iface enp59s0f0 inet manual
iface enp59s0f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves enp59s0f0 enp59s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-lacp-rate fast
#MGT / Data Switch Ports

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VM Connectivity Bridge

auto vmbr0.200
iface vmbr0.200 inet static
        address 192.168.7.150
        netmask 255.255.248.0
        gateway 192.168.0.1
#Proxmox Management Interface

iface eno1 inet manual
iface eno2 inet manual

auto bond1
iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-lacp-rate fast
#SAN Interconnect proxmox02

auto bond1.100
iface bond1.100 inet static
        address 10.9.100.100
        netmask 255.255.252.0
        up ip route add 10.9.100.101/32 dev bond1.100
        down ip route del 10.9.100.101/32
#SAN Public proxmox02

auto bond1.104
iface bond1.104 inet static
        address 10.9.104.100
        netmask 255.255.252.0
        up ip route add 10.9.104.101/32 dev bond1.104
        down ip route del 10.9.104.101/32
#SAN Cluster proxmox02

iface eno3 inet manual
iface eno4 inet manual

auto bond2
iface bond2 inet manual
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad
        bond-lacp-rate fast
#SAN Interconnect proxmox03

auto bond2.100
iface bond2.100 inet static
        address 10.9.100.100
        netmask 255.255.252.0
        up ip route add 10.9.100.102/32 dev bond2.100
        down ip route del 10.9.100.102/32
#SAN Public proxmox03

auto bond2.104
iface bond2.104 inet static
        address 10.9.104.100
        netmask 255.255.252.0
        up ip route add 10.9.104.102/32 dev bond2.104
        down ip route del 10.9.104.102/32
#SAN Cluster proxmox03

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond1 bond2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 100
#Proxmox Internal SAN Bridge

auto vmbr1.100
iface vmbr1.100 inet static
        address 10.9.100.100
        netmask 255.255.252.0
#Proxmox Internal SAN Public Bridge
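As far as we understand the routed setup, a VM bridged onto vmbr1 (VLAN 100) has no path to the other nodes' public addresses on its own: the /32 routes over bond1.100 and bond2.100 exist only on the host. A rough sketch of what we think would be needed, assuming the VM gets an address in 10.9.100.0/22 (untested on our side, and the peer nodes would presumably also need return routes to the VM's address):
Code:
# On the Proxmox node (untested assumption): let the host forward
# between the VM bridge and the mesh links.
sysctl -w net.ipv4.ip_forward=1

# Inside the VM: reach the peer nodes' public addresses via this
# node's vmbr1.100 address (10.9.100.100 on proxmox01).
ip route add 10.9.100.101/32 via 10.9.100.100
ip route add 10.9.100.102/32 via 10.9.100.100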
The routes configured on proxmox01:
Code:
Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
default         192.168.0.1     0.0.0.0          UG    0      0    0   vmbr0.200
192.168.0.0     0.0.0.0         255.255.248.0    U     0      0    0   vmbr0.200
10.9.100.101    0.0.0.0         255.255.255.255  UH    0      0    0   bond1.100
10.9.100.102    0.0.0.0         255.255.255.255  UH    0      0    0   bond2.100
10.9.104.101    0.0.0.0         255.255.255.255  UH    0      0    0   bond1.104
10.9.104.102    0.0.0.0         255.255.255.255  UH    0      0    0   bond2.104
link-local      0.0.0.0         255.255.0.0      U     0      0    0   idrac
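A quick way to check whether a VM can actually reach a monitor (assuming the monitors listen on the nodes' public addresses) would be:
Code:
# From inside a test VM: can we reach a monitor on another node?
ping -c 3 10.9.100.101
# Ceph monitors listen on TCP 6789 (msgr v1) and 3300 (msgr v2):
nc -zv 10.9.100.101 6789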
Thanks in advance.