At Geco-it, we use Linstor as our software-defined storage (SDS) solution.
To connect our hypervisors without investing in switches, we use a full mesh network.
To use Linstor storage inside VMs (and container volumes), the guests need access to the Linstor satellite network.
So we need a VM Storage bridge...
To set up the full mesh network, follow these guides:
- Full Mesh Network for Ceph Server
- [Full mesh (routed setup) + EVPN] it is feasible even by using SDN!
Infrastructure
Code:
+--------------------+   +--------------------+   +--------------------+
| Node 1             |   | Node 2             |   | Node 3             |
|                    |   |                    |   |                    |
| +-----+            |   |   +-----+          |   |   +-----+          |
| | VM1 |            |   |   | VMx |          |   |   | VMy |          |
| |     | <----------|---|-> |     | <--------|---|-> |     |          |
| +-----+            |   |   +-----+          |   |   +-----+          |
| +----------------+ |   | +----------------+ |   | +----------------+ |
| | VM SDN Bridge  | |   | | VM SDN Bridge  | |   | | VM SDN Bridge  | |
| |   "storage"    | |   | |   "storage"    | |   | |   "storage"    | |
| +----------------+ |   | +----------------+ |   | +----------------+ |
| |  eno1 |  eno2  | |   | |  eno1 |  eno2  | |   | |  eno1 |  eno2  | |
+--------------------+   +--------------------+   +--------------------+
      ^       ^                v       ^                v       ^
      |       +----------------+       +----------------+       |
      |                                                         |
      +---------------------------------------------------------+
Node Name | Loopback IP    | OpenFabric Network ID (NET) | NIC Name 1 | NIC Name 2 | NICs' MTU | VM SDN Bridge | VM SDN Bridge IP
----------|----------------|-----------------------------|------------|------------|-----------|---------------|------------------
node1     | 10.255.255.111 | 49.0001.1111.1111.1111.00   | eno1       | eno2       | 9000      | storage       | 10.20.45.111/24
node2     | 10.255.255.112 | 49.0001.2222.2222.2222.00   | eno1       | eno2       | 9000      | storage       | 10.20.45.112/24
node3     | 10.255.255.113 | 49.0001.3333.3333.3333.00   | eno1       | eno2       | 9000      | storage       | 10.20.45.113/24
Configuration
Add the following to /etc/network/interfaces on each Proxmox node.
Example for node1:
Code:
...
##
# Storage Network (OpenFabric mesh)
##

# Loopback alias: stable router IP for the mesh
auto lo:0
iface lo:0 inet loopback
        address 10.255.255.111/32
        # EVPN routing requires IP forwarding
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward

# Mesh NICs (jumbo frames)
auto eno1
iface eno1 inet manual
        mtu 9000

auto eno2
iface eno2 inet manual
        mtu 9000

# VXLAN tunnel over the mesh (MTU = 9000 - 50 bytes of VXLAN overhead)
auto vxlan_storage
iface vxlan_storage
        vxlan-id 101
        vxlan-local-tunnelip 10.255.255.111
        bridge-learning off
        mtu 8950

# Extends the SDN-generated "storage" vnet bridge with a host IP
# and the VXLAN tunnel as bridge port
iface storage
        address 10.20.45.111/24
        bridge-ports vxlan_storage
        post-up /usr/bin/systemctl restart frr.service
...
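The interfaces file above restarts frr.service, but the FRR configuration itself is only covered by the guides linked at the top. Here is a minimal sketch of /etc/frr/frr.conf for node1, assuming fabricd is enabled in /etc/frr/daemons (fabricd=yes) and using the NET from the table above:
Code:
! /etc/frr/frr.conf (node1) -- OpenFabric mesh sketch, adapt hostname/NET per node
frr defaults traditional
hostname node1
log syslog informational
service integrated-vtysh-config
!
interface lo
 ip router openfabric 1
 ! the loopback is advertised but forms no adjacency
 openfabric passive
!
interface eno1
 ip router openfabric 1
!
interface eno2
 ip router openfabric 1
!
router openfabric 1
 net 49.0001.1111.1111.1111.00
!
line vty
Once frr is up on all three nodes, vtysh -c "show openfabric topology" should list the other two loopbacks as reachable.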
SDN Configuration
- /etc/pve/sdn/controllers.cfg
Code:
...
evpn: vmbr1evpn
        asn 65000
        peers 10.255.255.111,10.255.255.112,10.255.255.113
...
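Once the settings below are applied, you can sanity-check that the three loopback peers have established their EVPN sessions. A quick check with FRR's vtysh, assuming the controller's FRR configuration has been deployed on the nodes:
Bash:
# The 10.255.255.x peers should show as Established
vtysh -c "show bgp l2vpn evpn summary"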
- /etc/pve/sdn/zones.cfg
Code:
...
simple: vmbr1
        ipam pve
        mtu 8950
...
- /etc/pve/sdn/vnets.cfg
Code:
...
vnet: storage
        zone vmbr1
        alias DRBD Storage
...
- /etc/pve/sdn/subnets.cfg
Code:
...
subnet: vmbr1-10.20.45.0-24
        vnet storage
...
Apply settings
- SDN
Bash:
pvesh set /cluster/sdn
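Applying the SDN settings regenerates the vnet definitions under /etc/network/interfaces.d/sdn, which is sourced from /etc/network/interfaces. Inspecting that file is a quick way to confirm the storage bridge was created as expected:
Bash:
# Show the bridge stanza generated for the "storage" vnet
cat /etc/network/interfaces.d/sdn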
- Apply the full network configuration (on each node)
Bash:
systemctl restart networking.service
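Alternatively, with ifupdown2 (the default since Proxmox VE 7), a reload applies the changes without briefly taking every interface down:
Bash:
ifreload -a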
Now, if you create a VM attached to the storage bridge, you should be able to ping the 10.20.45.x addresses of the other nodes and VMs.
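To validate the whole chain end to end, here are a few checks to run from a VM attached to the storage bridge. The addresses are this article's example values, and the Linstor check assumes the linstor client is installed in the guest and that a controller answers on the storage network:
Bash:
# Reach node2's bridge IP across the mesh
ping -c 3 10.20.45.112

# Verify jumbo payloads fit through the VXLAN (8950 MTU - 28 bytes of ICMP/IP headers)
ping -c 3 -M do -s 8922 10.20.45.112

# If a Linstor controller listens on the storage network,
# the satellite inventory should be reachable too
linstor --controllers 10.20.45.111 node list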