Hi,
I am trying to configure my Proxmox VE server to use bonding and VLANs, and to run the VMs/CTs in "routed" mode.
I have successfully configured bonding and VLANs on the Proxmox VE host itself. However, I am currently stuck configuring the vmbr0 interface for routed mode.
I have used the following resources:
https://pve.proxmox.com/wiki/Network_Configuration
https://help.ovhcloud.com/csm/en-de...?id=kb_article_view&sysparm_article=KB0043913
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_network_vlan
First of all, my relevant interfaces and IPs:
eno1 and eno2 are physical interfaces that are enslaved to bond0; eno2 is the primary interface. The bond itself runs in active-backup mode.
VLAN 10 is configured on top of the bond and carries the public IP of the host itself. Let's say in this example the host's public IP and subnet are "123.123.123.123/22" and the gateway is "123.123.123.1".
This part works great: the host has connectivity, and both the VLAN and the failover work.
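For completeness, this is roughly how I verify the failover (a sketch using the interface names from above, nothing Proxmox-specific):
Bash:
# Show which slave is currently active and the link state of both NICs
cat /proc/net/bonding/bond0

# Take the primary NIC down and check that traffic fails over to eno1
ip link set eno2 down
ping -c 3 123.123.123.1
ip link set eno2 up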
Now comes the part where I am stuck.
I am trying to configure my VMs/CTs so they run in routed mode and do not use bridging to the physical uplink at all.
The VMs/CTs run in the same subnet as the host, but could also use other subnets (to keep the complexity down, let's stick to the network above).
I tried to configure it as described in
https://help.ovhcloud.com/csm/en-de...?id=kb_article_view&sysparm_article=KB0043913.
First issue:
If I configure it via vmbr0, the CTs cannot reach the gateway 192.168.0.1. Then I tried to configure vmbr0v10, but I cannot even choose this bridge in the CT configuration.
I do not want to configure the VLANs inside the CTs themselves.
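For reference, the bridges that actually exist on the host can be listed like this (a sketch; as far as I understand, the CT's bridge dropdown only offers vmbr* bridges defined in the node's network configuration):
Bash:
# List all bridge devices currently present on the host
ip -br link show type bridge

# Show the bridges and their attached ports (brctl needs the bridge-utils package)
brctl show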
My /etc/network/interfaces:
Bash:
auto lo
iface lo inet loopback
        # enable routing and proxy ARP for the routed setup
        up echo "1" > /proc/sys/net/ipv4/ip_forward
        up echo "1" > /proc/sys/net/ipv4/conf/bond0/proxy_arp
        up echo "1" > /proc/sys/net/ipv4/conf/bond0.10/proxy_arp

auto eno1
iface eno1 inet manual
        bond-master bond0

auto eno2
iface eno2 inet manual
        bond-master bond0

iface ens2f0 inet manual

iface ens2f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno2
        bond-downdelay 400
        bond-updelay 10000

# VLAN 10 on top of the bond carries the host's public IP
auto bond0.10
iface bond0.10 inet static
        address 123.123.123.123/22
        gateway 123.123.123.1

# My first try - I can select the bridge in the VM, but the CTs have no connection
# to the gateway - the host can reach the CTs over vmbr0, though.
#auto vmbr0
#iface vmbr0 inet static
#        address 192.168.0.1/24
#        bridge-ports none
#        bridge-stp off
#        bridge-fd 0
#        up ip route add 123.123.120.0/22 dev vmbr0

# My second try - I cannot even select the bridge in the VM
auto vmbr0v10
iface vmbr0v10 inet static
        address 192.168.0.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        up ip route add 123.123.120.0/22 dev vmbr0v10
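For reference, these are the checks I run on the host to confirm that forwarding, proxy ARP and the routes are actually in place (a sketch; use vmbr0 or vmbr0v10, whichever variant is currently active):
Bash:
# forwarding and proxy ARP should all report 1
sysctl net.ipv4.ip_forward
cat /proc/sys/net/ipv4/conf/bond0/proxy_arp
cat /proc/sys/net/ipv4/conf/bond0.10/proxy_arp

# the bridge should be up with 192.168.0.1/24 and there should be a route for the public subnet
ip addr show vmbr0
ip route show

# from the host, the CT should answer on the bridge (123.123.120.25 is the example CT IP)
ping -I vmbr0 -c 3 123.123.120.25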
With the first configuration (vmbr0) I was able to reach the CT from the host using `ping -I vmbr0 CT_IP`, but the CT was not able to ping 192.168.0.1.
The CT network configuration was this:
Code:
IP: public ip of the CT, e.g. 123.123.120.25/32
Gateway: 192.168.0.1
Bridge: vmbr0
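For clarity, an equivalent manual configuration inside the CT would look roughly like this (a sketch, assuming a Debian-style /etc/network/interfaces with eth0 as the CT's NIC; the point is that with a /32 address the gateway needs an explicit on-link route before a default route via it can work):
Code:
auto eth0
iface eth0 inet static
        address 123.123.120.25/32
        # 192.168.0.1 is outside the /32, so it first needs an on-link host route
        post-up ip route add 192.168.0.1 dev eth0
        post-up ip route add default via 192.168.0.1 dev eth0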
What am I missing? Can anyone point me in the right direction? Is this setup too complex, and should I perhaps use Open vSwitch instead? Any advice? Thank you!