Hi everybody, greetings to all members of this great community.
I am new to Proxmox, but after reading some documentation I decided to use it as the virtualization environment on two servers I have at OVH. Unfortunately my servers have only one NIC each (no private network), so I installed Proxmox on both of them but did not join them in a cluster, to avoid overloading the single NIC with corosync traffic.
I set up the first VM as a firewall (OPNsense), and for that I created the configuration below, which seemed to work perfectly with both IPv4 and IPv6 traffic. Behind the firewall I have some VMs in a DMZ and some in a LAN, with OpenVPN access for administration. I also decided to enable IPv6 on the WAN in order to manage DNS properly. Routing from the VMs to the internet, and from the internet to the DMZ and the VPN, was fine.
I also have 4 additional public IP addresses (IP Failover). After talking to OVH support, they told me it is possible to use them on the WAN interface of the OPNsense firewall by assigning them all the same MAC address (this can be done via the OVH API, and I did it). So now the 4 failover IPs share the same MAC address.
Everything seemed to work fine at the beginning, then I noticed some strange behaviour from the VMs. Now, without any further changes, after a reboot both servers come up but with "BLOCKING STATE" on the ethernet card eno1. The servers are not reachable from outside and I cannot use them. I would like to know how to solve this problem, and whether it was caused by a bad network configuration or by the MAC address manipulation of the failover IPs that I assigned directly to the WAN interface of the OPNsense firewall. I would like to understand this before putting the servers into production, to avoid bad surprises later. Thanks in advance for your help.
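For reference, the state the bridge port ends up in, and the MAC the firewall's WAN NIC is presenting, can be checked with something like the commands below (standard iproute2/bridge tools plus Proxmox's qm; the VM ID 100 is only a placeholder):

# kernel log for eno1: "entered blocking state" is normally followed shortly
# afterwards by "entered forwarding state" once the port has joined vmbr0
dmesg | grep -i eno1
# current state of the bridge port (it should report "state forwarding")
bridge link show dev eno1
# MAC configured on the firewall VM's WAN NIC (100 is a placeholder VM ID);
# it should match the virtual MAC assigned to the failover IPs in the OVH manager
qm config 100 | grep ^net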
Here is my /etc/network/interfaces:
# network interfaces
auto lo
iface lo inet loopback

iface eno1 inet manual

iface lo inet6 loopback

auto vmbr0
iface vmbr0 inet static
    address x.19x.8x.49
    netmask 255.255.255.0
    gateway x.19x.8x.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    port-up route add x.19x.8x.254 vmbr0
    port-up route add default gw x.19x.8x.254

# Main IPv6 address
iface vmbr0 inet6 static
    address 2001:xxxx:a:xxxx::1
    netmask 64
    # IPv6 Gateway
    post-up sleep 5; /sbin/ip -6 route add 2001:xxxx:a:xxFF:FF:FF:FF:FF dev vmbr0
    post-up sleep 5; /sbin/ip -6 route add default via 2001:xxxx:a:xxFF:FF:FF:FF:FF
    pre-down /sbin/ip -6 route del default via 2001:xxxx:a:xxFF:FF:FF:FF:FF
    pre-down /sbin/ip -6 route del 2001:xxxx:a:xxFF:FF:FF:FF:FF dev vmbr0

# Virtual switch for LAN
# (connect your firewall/router KVM instance and private LAN hosts here)
auto vmbr1
iface vmbr1 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0

# Virtual switch for DMZ
# (connect your firewall/router KVM instance and DMZ hosts here)
auto vmbr2
iface vmbr2 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    bridge-vlan-aware yes

iface vmbr2 inet6 manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    bridge-vlan-aware yes
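For reference, this file can be re-applied and the resulting addresses and routes checked without a full reboot with something like the following, assuming the ifupdown2 package that current Proxmox VE releases ship by default:

# reload /etc/network/interfaces
ifreload -a
# confirm vmbr0 got its IPv4/IPv6 addresses and the IPv6 default route was installed
ip -br addr show vmbr0
ip -6 route show default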
This is my sysctl.conf:
net.ipv6.conf.all.autoconf=0
net.ipv6.conf.all.accept_ra=0
net.ipv6.conf.all.forwarding=1
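These settings can be re-applied and verified with plain sysctl, for example:

# load the file and print the values the kernel is actually using
sysctl -p /etc/sysctl.conf
sysctl net.ipv6.conf.all.forwarding net.ipv6.conf.all.accept_ra net.ipv6.conf.all.autoconf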