Hi
What would be the best network configuration on the Proxmox nodes for connecting an NFS VM datastore as shared storage?
As far as I understand, Proxmox recommends using bridges and not assigning IPs directly to interfaces.
I configured a switch with LACP, and it passes two VLANs (10 and 20) to the Proxmox hosts: one for NFS, the other for VM migration.
So this bond will carry only Proxmox-node-to-storage NFS traffic and node-to-node VM migration traffic.
No VMs will pass any traffic through this bond interface.
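On the migration side, my plan is to point live migration at the VLAN 20 subnet via /etc/pve/datacenter.cfg, roughly like this (the subnet matches my config below, and 'secure' is just the default I would keep):

# /etc/pve/datacenter.cfg (planned, not applied yet)
migration: secure,network=192.168.1.0/24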
I tried two configurations:
1. Bond
auto bond2
iface bond2 inet manual
    bond-slaves ens1f0 ens2f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
#Storage. Port channel 10 to storage sw

auto bond2.10
iface bond2.10 inet static
    address 192.168.100.1/24

auto bond2.20
iface bond2.20 inet static
    address 192.168.1.1/24
#Migration
2. Bridge
auto bond2
iface bond2 inet manual
    bond-slaves ens1f0 ens2f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
#Storage. Port channel 10 to storage sw

auto vmbr2_10
iface vmbr2_10 inet static
    address 192.168.100.1/24
    bridge-ports bond2.10
    bridge-stp off
    bridge-fd 0
#Storage switch

auto vmbr2_20
iface vmbr2_20 inet static
    address 192.168.1.1/24
    bridge-ports bond2.20
    bridge-stp off
    bridge-fd 0
#Migration switch
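(For what it's worth, in both cases I can check whether the bond actually negotiated LACP with the switch by looking at cat /proc/net/bonding/bond2, which shows the 802.3ad aggregator and per-slave state.)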
In both configurations I can ping the NFS storage from the Proxmox hosts (I haven't actually tried adding it as a datastore yet), as well as the 'migration' IPs.
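Once the networking question is settled, I intend to add the export roughly like this (the storage ID, export path, and server IP 192.168.100.10 here are placeholders, not my real values):

pvesm add nfs vm-nfs --server 192.168.100.10 --export /export/vmstore --content images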
Should I put the IPs on the bonded VLAN interfaces directly, or on bridges? Are there any benefits to using a bridge here? Does the additional layer (in this case, the vmbr) add extra latency?
Thank you.