Proxmox VE 3.2 DRBD cluster plus Open vSwitch

Here is what I have:
A fully working 2-NIC cluster, basically following the wiki: http://pve.proxmox.com/wiki/DRBD
What I would like to have: Open vSwitch support added on top of that.
I tried to add Open vSwitch via the GUI for private VM connectivity. Only one VM should act as firewall/router with Internet access for the whole cluster, using "vmbr0".
But this failed; afterwards the cluster was not usable anymore.
Setup before creating the OVS bridge:
/etc/network/interfaces
# primary interface
auto eth0
iface eth0 inet static
address 148.XXX.XXX.XXX
netmask 255.255.255.224
gateway 148.XXX.XXX.XXX
broadcast 148.XXX.XXX.XXX
up route add -net 148.XXX.XXX.XXX netmask 255.255.255.224 gw 148.XXX.XXX.XXX eth0
# bridge for routed communication (Hetzner)
# external connection router/firewall-vm, classic Linux Bridge
auto vmbr0
iface vmbr0 inet static
address 148.XXX.XXX.XXX
netmask 255.255.255.248
bridge_ports none
bridge_stp off
bridge_fd 0
# internal connection drbd / cluster
auto eth1
iface eth1 inet static
address 172.24.10.1
netmask 255.255.255.0
Network after adding the OVS bridge with a private IP via the GUI and a reboot (eth0 and vmbr0 were untouched):
/etc/network/interfaces
...
allow-vmbr1 eth1
iface eth1 inet static
address 172.24.10.1
netmask 255.255.255.0
ovs_type OVSPort
ovs_bridge vmbr1
auto vmbr1
iface vmbr1 inet static
address 192.168.20.1
netmask 255.255.255.0
ovs_type OVSBridge
ovs_ports eth1
Now node 1 cannot ping node 2 anymore; the cluster (connected via 172.24.10.1 and 172.24.10.2) is down.
However, pinging the other node's OVS bridge IP (192.168.20.2) is still possible.
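To check whether eth1 actually kept its 172.24.10.1 address after becoming an OVS port, something like this on node 1 should show it (a diagnostic sketch using the standard iproute2 and Open vSwitch CLI tools, not output from my actual setup):

# does eth1 still carry the 172.24.10.1 address?
ip addr show eth1
# which ports are attached to vmbr1?
ovs-vsctl list-ports vmbr1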
AFAIK, something is wrong here:
once "iface eth1 inet static" is declared, you cannot use "ovs_ports eth1" in "vmbr1" anymore.
Maybe it would work to change "iface eth1 inet static" to "iface int1 inet static" and add "ovs_ports eth1 int1" to "vmbr1", as sketched below.
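An untested sketch of that idea, assuming the Debian openvswitch-switch ifupdown integration (the port name "int1" is arbitrary; eth1 becomes a plain OVSPort without an address, and int1 an OVSIntPort carrying the cluster address):

# physical uplink, no IP of its own
allow-vmbr1 eth1
iface eth1 inet manual
ovs_type OVSPort
ovs_bridge vmbr1
# internal port carrying the DRBD/cluster address
allow-vmbr1 int1
iface int1 inet static
address 172.24.10.1
netmask 255.255.255.0
ovs_type OVSIntPort
ovs_bridge vmbr1
# the OVS bridge itself
auto vmbr1
iface vmbr1 inet static
address 192.168.20.1
netmask 255.255.255.0
ovs_type OVSBridge
ovs_ports eth1 int1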
Is there a howto for adding Open vSwitch support to a 2-node DRBD cluster? That would be much appreciated.