Hello all,
I have a Proxmox cluster with:
- Physical host A
- Physical host B
- Virtual host C for HA
Between the physical hosts, I have a shared DRBD LVM volume for the VMs (Primary/Primary; sync is OK).
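For reference, my DRBD resource looks roughly like this (a simplified sketch: the resource name r0, the backing disk /dev/sdb1, and the hostnames are placeholders, not my exact values):
Code:
resource r0 {
    protocol C;                 # synchronous replication, needed for dual-primary
    startup {
        become-primary-on both; # bring both nodes up as Primary for live migration
    }
    net {
        allow-two-primaries;    # Primary/Primary as described above
        # basic automatic split-brain recovery policies
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    on hostA {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.3:7788;  # replication over the admin network
        meta-disk internal;
    }
    on hostB {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.4:7788;
        meta-disk internal;
    }
}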
Each physical host has two NICs.
These NICs are in bond0 (balance-rr), and bond0 is attached to vmbr0 at 192.168.10.1/24 (192.168.10.2/24 on the second host).
Now, for the VMs, we need several networks.
Indeed, we have:
- 192.168.10.0/24 (X-10) for the admin network (IPs for the physical hosts and a VM eth)
- 192.168.20.0/24 (Y-20) and 192.168.30.0/24 (Z-30) for the VMs.
Each network is in a different VLAN (10, 20, 30).
The VMs (one eth per network) must communicate on every network across both physical hosts, but each server has only two NICs.
To me, the physical host configuration should be:
==========
HOST A
==========
Code:
auto lo
iface lo inet loopback

# Eth interfaces
iface eth0 inet manual
iface eth1 inet manual

# Bond interfaces
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode balance-rr

# vmbr interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.10.3
    netmask 255.255.255.0
    gateway 192.168.10.1
    bridge_ports bond0 vlan10
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.20.3
    netmask 255.255.255.0
    bridge_ports vlan20
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '192.168.20.0/24' -o bond0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.20.0/24' -o bond0 -j MASQUERADE

auto vmbr2
iface vmbr2 inet static
    address 192.168.30.3
    netmask 255.255.255.0
    bridge_ports vlan30
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '192.168.30.0/24' -o bond0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.30.0/24' -o bond0 -j MASQUERADE

# VLAN
auto vlan10
iface vlan10 inet manual
    vlan_raw_device bond0

auto vlan20
iface vlan20 inet manual
    vlan_raw_device bond0

auto vlan30
iface vlan30 inet manual
    vlan_raw_device bond0
==========
HOST B
==========
Code:
auto lo
iface lo inet loopback

# Eth interfaces
iface eth0 inet manual
iface eth1 inet manual

# Bond interfaces
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode balance-rr

# vmbr interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.10.4
    netmask 255.255.255.0
    gateway 192.168.10.1
    bridge_ports bond0 vlan10
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.20.4
    netmask 255.255.255.0
    bridge_ports vlan20
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '192.168.20.0/24' -o bond0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.20.0/24' -o bond0 -j MASQUERADE

auto vmbr2
iface vmbr2 inet static
    address 192.168.30.4
    netmask 255.255.255.0
    bridge_ports vlan30
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '192.168.30.0/24' -o bond0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.30.0/24' -o bond0 -j MASQUERADE

# VLAN
auto vlan10
iface vlan10 inet manual
    vlan_raw_device bond0

auto vlan20
iface vlan20 inet manual
    vlan_raw_device bond0

auto vlan30
iface vlan30 inet manual
    vlan_raw_device bond0
#######################
This configuration doesn't work.
On the DRBD cluster, I get a desync. The impact: if we migrate a VM to the second host during the desync, the VM does not boot and shows: "Booting from Hard Disk ; Boot failed: not a bootable disk".
So I have to stop the VM to release its resources, and restart the DRBD service on both physical hosts to recover my DRBD device = HA lost.
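Concretely, my manual recovery today looks like this (r0 being the resource name, as a placeholder):
Code:
# on both physical hosts, after stopping the affected VMs
service drbd restart
# check connection and disk states: should be Connected and UpToDate/UpToDate
drbdadm cstate r0
drbdadm dstate r0
# if one node stays StandAlone or Secondary, reconnect and promote it
drbdadm connect r0
drbdadm primary r0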
I need DRBD to come back up automatically when it goes down.
(Heartbeat? Pacemaker? What actions?)
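For example, would something in this direction be right with Pacemaker? (A rough crm shell sketch, not tested, assuming the ocf:linbit:drbd resource agent; r0 is again my placeholder resource name.)
Code:
# crm configure
primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor interval=15s role=Master \
    op monitor interval=30s role=Slave
# multi-state resource, promoted to Master on both nodes (dual-primary)
ms ms_drbd_r0 p_drbd_r0 \
    meta master-max=2 master-node-max=1 clone-max=2 notify=true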
For the second part, what is the best solution for my network configuration? Dummy interfaces?
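(By dummy interfaces, I mean something like this in /etc/network/interfaces, relying on the dummy kernel module; just a sketch:)
Code:
auto dummy0
iface dummy0 inet manual
    pre-up modprobe dummy
    up ip link set dummy0 up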
Thanks in advance!
Edit:
I am now thinking of this for the network configuration:
Code:
auto lo
iface lo inet loopback

# Eth interfaces
iface eth0 inet manual
iface eth1 inet manual

# Bond interfaces
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode balance-rr

# VLAN subinterfaces (tags 10/20/30, matching the VLANs above)
auto bond0.10
iface bond0.10 inet manual
    vlan-raw-device bond0

auto bond0.20
iface bond0.20 inet manual
    vlan-raw-device bond0

auto bond0.30
iface bond0.30 inet manual
    vlan-raw-device bond0

# vmbr interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.10.3
    netmask 255.255.255.0
    gateway 192.168.10.1
    bridge_ports bond0.10
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.20.3
    netmask 255.255.255.0
    bridge_ports bond0.20
    bridge_stp off
    bridge_fd 0

auto vmbr2
iface vmbr2 inet static
    address 192.168.30.3
    netmask 255.255.255.0
    bridge_ports bond0.30
    bridge_stp off
    bridge_fd 0
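If this is right, after restarting networking I would expect each bridge to carry a single tagged port (verification commands, assuming the bridge-utils and vlan packages are installed):
Code:
brctl show                    # vmbr0/vmbr1/vmbr2, each with one bond0.X port
cat /proc/net/bonding/bond0   # both slaves up in balance-rr mode
ip -d link show bond0.10      # should show "vlan ... id 10" on bond0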
What do you think of this?
Thanks