Hi forum,
We want to migrate our running PVE cluster to newer hardware and take the
opportunity to improve our cluster network setup.
Current status of the new PVE: all 3 nodes have PVE 8.3.0 installed and most
networks are configured. Cluster creation is still pending because we are still
optimizing the cluster network layout.
In the end we want a 3-node hyperconverged, full-mesh Ceph cluster using the
following dedicated networks:
a bridged public network, a corosync network, and a meshed Ceph cluster network
configured with FRR/OpenFabric in a failover setup:
## /etc/network/interfaces
>>>
auto lo
iface lo inet6 loopback
# if FRR-OpenFabric - ceph meshed cluster net
...
# 8: ens3f0
auto ens3f0
iface ens3f0 inet6 static
mtu 9000
# lower nic fiber port1 r - cluster net ceph meshed if
# 9: ens3f1
auto ens3f1
iface ens3f1 inet6 static
mtu 9000
# lower nic fiber port2 l - cluster net ceph meshed if
...
<<<
## /etc/frr/frr.conf
>>>
frr defaults traditional
hostname < name node 3 >
log syslog warning
#ip forwarding
ipv6 forwarding
service integrated-vtysh-config
!
interface lo
ipv6 address fdbe:8cf3:7199::3/128
ipv6 router openfabric 1
openfabric passive
!
interface ens3f0
ipv6 router openfabric 1
openfabric csnp-interval 2
openfabric hello-interval 1
openfabric hello-multiplier 2
!
interface ens3f1
ipv6 router openfabric 1
openfabric csnp-interval 2
openfabric hello-interval 1
openfabric hello-multiplier 2
!
line vty
!
router openfabric 1
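 ! NET = area (49.0001) + system ID (3333.3333.3333, unique per node) + NSEL (00)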
net 49.0001.3333.3333.3333.00
lsp-gen-interval 1
max-lsp-lifetime 600
lsp-refresh-interval 180
<<<
The Ceph cluster network is working fine so far.
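For reference, this is roughly how the fabric state can be checked (standard FRR
vtysh show commands, assuming fabricd is enabled):
>>>
# adjacencies and learned topology of the fabric
vtysh -c "show openfabric topology"
# the /128 loopbacks of the peers should show up as OpenFabric routes
vtysh -c "show ipv6 route"
# reachability of a peer loopback (node 1 in our addressing)
ping -c 3 fdbe:8cf3:7199::1
<<<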
Eventually we want to add a dedicated migration network, and we have the following questions:
1. Does it make sense to use an FRR/OpenFabric configuration for a dedicated meshed migration network too?
2. Since we already use the loopback interface in our Ceph cluster network configuration, can we use a second
static IPv6 prefix with one address per node (again on the loopback) to get a similar failover configuration?
(See the sketch after this list.)
3. Does anyone already have experience with such a configuration for a dedicated migration network?
4. Or is there a better way to implement a dedicated migration network?
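To make question 2 concrete, here is a rough, untested sketch of what we have in mind.
The migration prefix fdbe:8cf3:7199:1::/64 and the datacenter.cfg line are only
assumptions for illustration; whether PVE actually picks a /128 loopback address when
the migration network is given as a CIDR is exactly what we are unsure about:
## /etc/frr/frr.conf (additions per node; node 3 shown)
>>>
interface lo
 ! second /128 per node, announced into the existing fabric
 ipv6 address fdbe:8cf3:7199:1::3/128
 ipv6 router openfabric 1
 openfabric passive
<<<
## /etc/pve/datacenter.cfg
>>>
migration: secure,network=fdbe:8cf3:7199:1::/64
<<<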
Thank you in advance and best regards!