Networking Setup for OpenVZ and Qemu with Multiple Subnets

Myles McNamara

Jan 17, 2013
I've been searching all day trying to figure out the best way to achieve this, but I'm still wondering what the right approach is. I've found a lot of different information while looking through Server Overflow, the Proxmox Wiki, and Google, and I'm still not sure.

I've been assigned two public subnets from my provider on the same uplink port, eth1.

My current configuration:
Code:
    auto lo
    iface lo inet loopback
    
    iface eth1 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address 209.x.x.42
        netmask 255.255.255.248
        gateway 209.x.x.41
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

This configuration works correctly for the subnet specified above, with the OpenVZ containers using venet0. I have yet to try Qemu, as I ran into problems trying to get the other subnet working.

I was assigned another subnet on the same uplink port eth1:
Code:
Range: 209.x.x.96/27
Netmask: 255.255.255.224
Usable: 209.x.x.98-126

From everything I've read online it looks like my best option is the routed method, which supports both OpenVZ and Qemu, but I'm still not sure how to set up the network configuration to handle both of these subnets. I also want to set it up so that when I need to add another subnet I don't have to fiddle with the settings too much, since I'll have servers running on there that I won't want to shut down while I tinker.

Can someone please help me, or point me in the right direction on what I should research to set up the network config? I've been messing with this all day and want to make sure I'm doing it correctly, not just using a workaround that may come back to bite me later on.

Thank you in advance, any help is greatly appreciated!
 
I was assigned another subnet on the same uplink port eth1:
Code:
Range: 209.x.x.96/27
Netmask: 255.255.255.224
Usable: 209.x.x.98-126

Just create another bridge:

Code:
    auto vmbr1
    iface vmbr1 inet static
        address 209.x.x.98
        netmask 255.255.255.224
        gateway 209.x.x.97
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

Btw. what do you use eth0 for?
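
To see which bridges already exist and which physical ports they carry before adding a new one, something like this should work, assuming the standard bridge-utils package is installed:
Code:
# list all bridges and the interfaces enslaved to them
brctl show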
 
Heh, you learn something new every day. For some reason I thought there was a one-to-one relationship between vmbr interfaces and actual interfaces, and my first instinct was to create a subinterface (eth1:0) to bridge to. Good to know.
 
Just create another bridge: […]

Btw. what do you use eth0 for?

Thanks for the quick reply!

I'm not sure what eth0 is being used for; I only had this server set up yesterday and am still figuring everything out. Looking at the port itself, though, it doesn't show any RX or TX, so I'm guessing it's simply unused and they plugged the uplink into eth1 instead of eth0.

I attempted to create another bridge yesterday, set up exactly as in your example. When I did, I received an error in the console saying something like "could not bridge vmbr1 to eth1 as this device is already bridged to vmbr0".

Should I be able to create multiple bridges on one device (eth1)? From what I've read online I thought it was not possible to attach one device to multiple bridges, but I don't know how accurate that is.
 
For bridged mode on Qemu and routed mode on OpenVZ, the configuration needs to look like this (using the /29 subnet for the bridge and the /27 for OpenVZ routed):

Code:
    auto lo
    iface lo inet loopback

    iface eth1 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address  209.x.x.42
        netmask  255.255.255.248
        gateway  209.x.x.41
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp

Just set up the bridge with the /29 subnet, and then enable proxy_arp on the vmbr0 interface (the post-up line above).
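
To verify the setting actually took effect once the bridge is up, a quick check could be:
Code:
# confirm proxy ARP is enabled on the bridge; expected output: 1
cat /proc/sys/net/ipv4/conf/vmbr0/proxy_arp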

Using OpenVZ venet you do not need to specify the other subnet in the configuration file; you can use IPs from either the /29 or the /27 for OpenVZ. Really, you can use any subnet that is routed to the server on port eth1, so adding IPs or subnets later on is easy and does not require any kind of reboot!
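
For example, handing a container an address from the /27 would then just be the usual vzctl call (CTID 101 and the .99 address here are hypothetical):
Code:
# assign an IP from the /27 to container 101 - no change to /etc/network/interfaces needed
vzctl set 101 --ipadd 209.x.x.99 --save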

Also change the value of VE_ROUTE_SRC_DEV in /etc/vz/vz.conf to:
Code:
VE_ROUTE_SRC_DEV="vmbr0"
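
A quick way to double-check that the new value is in place (it should only affect containers started after the change):
Code:
grep VE_ROUTE_SRC_DEV /etc/vz/vz.conf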


Voila!
 
Hi folks,

I'm quite new to Proxmox and I have to say it is awesome so far (I have one Proxmox box running pfSense in a KVM VM as a firewall/router). However, I'm having trouble with another setup where I'd like to run both KVM VMs and OpenVZ CTs.

The issue is that I cannot get the OpenVZ CTs to reach anything outside the host box. KVM VMs in bridged mode work perfectly fine, but OpenVZ CTs using routed mode are not, and I would truly appreciate it if someone could point me in the right direction.

Here are the relevant configuration files. Basically, I want to assign a public IP to the OpenVZ CT.

Code:
# cat /etc/network/interfaces 
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address xx.xx.xx.18
        netmask 255.255.255.248
        gateway xx.xx.xx.22
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp

Routing table on the host box (I know I should use iproute2... but it's a habit, really):
Code:
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
xx.xx.xx.21      0.0.0.0         255.255.255.255 UH    0      0        0 venet0
xx.xx.xx.16      0.0.0.0         255.255.255.248 U     0      0        0 vmbr0
0.0.0.0         xx.xx.xx.22      0.0.0.0         UG    0      0        0 vmbr0

Code:
# grep -E 'NEIGHBOUR_DEVS|VE_ROUTE_SRC_DEV' /etc/vz/vz.conf
VE_ROUTE_SRC_DEV="vmbr0"
NEIGHBOUR_DEVS=all

I've added the public IP to the OpenVZ CT, created from the official centos6 cached template, using:
Code:
vzctl set 102 --ipadd xx.xx.xx.21 --save

and the CT has the following routing table:

Code:
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 venet0
0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 venet0

and the interfaces are as follows:
Code:
# ifconfig 
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet addr:127.0.0.1  P-t-P:127.0.0.1  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:64 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:5247 (5.1 KiB)

venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet addr:xx.xx.xx.21  P-t-P:23.31.6.21  Bcast:23.31.6.21  Mask:255.255.255.255
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

I've flushed iptables on both the Proxmox host and inside the OpenVZ CT, but I still cannot get networking to work from within the CT. xx.xx.xx.22 is the gateway; I cannot ping it from the CT, yet I can reach all the KVM VMs and the Proxmox host without a problem.

When I try to ping the gateway from the CT I get this:
Code:
# ping xx.xx.xx.22
PING xx.xx.xx.22 (xx.xx.xx.22) 56(84) bytes of data.
^C
--- xx.xx.xx.22 ping statistics ---
84 packets transmitted, 0 received, 100% packet loss, time 83776ms

and while the ping was ongoing I ran tcpdump on the host's venet0 interface, and no replies came back:
Code:
# tcpdump -nnq -i venet0
....
07:34:06.530763 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 25608, seq 17, length 64
07:34:07.530805 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 25608, seq 18, length 64
07:34:08.530754 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 25608, seq 19, length 64
07:34:09.530907 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 25608, seq 20, length 64
07:34:10.530824 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 25608, seq 21, length 64
07:34:11.530784 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 25608, seq 22, length 64
07:34:12.530820 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 25608, seq 23, length 64
07:34:13.530801 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 25608, seq 24, length 64
07:34:14.530827 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 25608, seq 25, length 64
07:34:15.530812 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 25608, seq 26, length 64

I kindly ask if somebody can tell me what I'm missing here. Also, I'm really sorry if it is not OK to reply in this thread, but I found it the most relevant one even though I'm not using different subnets. If that's a problem, please let me know and I'll start a separate thread about the issue.

thank you
 
You're missing the entry for eth0 in your /etc/network/interfaces.

Change the file to look like this and restart:

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address xx.xx.xx.18
        netmask 255.255.255.248
        gateway xx.xx.xx.22
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp
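
One way to apply the new file without rebooting the whole node (on the Debian base Proxmox uses) would be:
Code:
# reload the network configuration; a full reboot works too
/etc/init.d/networking restart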
 
Hey guys... still wrestling with this issue. What I've noticed is that when I try to ping the gateway at xx.xx.xx.22 from within the OpenVZ CT, I see this inside the CT:

Code:
# tcpdump -nnq -i venet0:0

13:33:05.666643 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 272, length 64
13:33:06.666653 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 273, length 64
13:33:07.666652 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 274, length 64
13:33:08.666704 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 275, length 64
13:33:09.666599 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 276, length 64
13:33:10.666649 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 277, length 64

So the CT is sending ICMP echo requests to the gateway but gets no replies.

However, while the ping is running from the CT to the gateway, I ran tcpdump against the host's uplink (eth0) and it shows the gateway sending replies back to xx.xx.xx.21, but those replies never make it back to the CT.

Code:
# tcpdump -nnq -i eth0 host xx.xx.xx.21

13:35:00.667663 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 387, length 64
13:35:00.668237 IP xx.xx.xx.22 > xx.xx.xx.21: ICMP echo reply, id 1618, seq 387, length 64
13:35:01.667667 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 388, length 64
13:35:01.668198 IP xx.xx.xx.22 > xx.xx.xx.21: ICMP echo reply, id 1618, seq 388, length 64
13:35:02.667663 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 389, length 64
13:35:02.668191 IP xx.xx.xx.22 > xx.xx.xx.21: ICMP echo reply, id 1618, seq 389, length 64
13:35:03.667661 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 390, length 64
13:35:03.668212 IP xx.xx.xx.22 > xx.xx.xx.21: ICMP echo reply, id 1618, seq 390, length 64
13:35:04.667667 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 391, length 64
13:35:04.668188 IP xx.xx.xx.22 > xx.xx.xx.21: ICMP echo reply, id 1618, seq 391, length 64
13:35:05.667666 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1618, seq 392, length 6

I re-checked with:

Code:
iptables -t nat -L && iptables -t filter -L && iptables -t mangle -L

and there are no firewall rules at all.

I'm flying blind here now. Can somebody shed some light on this?

Thank you in advance,

- d

EDIT: I may have forgotten to mention that the CT is not reachable on its public IP from outside either.
 
You're missing the entry for eth0 in your /etc/network/interfaces. […]

I think that is not the problem here.

You need to follow this guide: http://www.revsys.com/writings/quicktips/nat.html

The iptables rules must be installed on the node where the CT is running.

For a more complete example, try Shorewall. There are a number of example files you can adapt.
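
For reference, that guide boils down to a few iptables and sysctl lines; a minimal sketch, assuming eth0 is the public uplink (interface names are placeholders for your setup):
Code:
# enable IPv4 forwarding on the node
echo 1 > /proc/sys/net/ipv4/ip_forward

# masquerade traffic leaving the public uplink (for CTs with private IPs)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# allow forwarded traffic to and from the containers' venet0
iptables -A FORWARD -i venet0 -j ACCEPT
iptables -A FORWARD -o venet0 -j ACCEPT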

I may have misunderstood what you mean, or you may have misunderstood my issue. Basically, I have another public IP in the same /29 subnet that I want to use in a CT via venet. I have one IP assigned to the Proxmox host and two more assigned to 2 KVM VMs via vmbr0, and those are working just fine.

So I guess I don't want to use NAT here, as the IP I'm giving to the CT is not a private one. On the other Proxmox box I already have pfSense in a KVM VM taking care of everything, including NAT.

Anyhow, I've tried what you suggested, set up the POSTROUTING and FORWARD rules, and also enabled IPv4 forwarding, but there is still no connectivity from the CT to anything beyond the host box.

This is a tcpdump against the host's vmbr0:

Code:
15:40:51.734654 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1064, seq 45, length 64
15:40:51.735233 IP xx.xx.xx.22 > xx.xx.xx.21: ICMP echo reply, id 1064, seq 45, length 64
15:40:52.734665 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1064, seq 46, length 64
15:40:52.735218 IP xx.xx.xx.22 > xx.xx.xx.21: ICMP echo reply, id 1064, seq 46, length 64

This is a tcpdump against the CT's venet0:

Code:
15:45:02.735652 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1064, seq 296, length 64
15:45:03.735664 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1064, seq 297, length 64
15:45:04.735670 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1064, seq 298, length 64
15:45:05.735659 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1064, seq 299, length 64
15:45:06.735654 IP xx.xx.xx.21 > xx.xx.xx.22: ICMP echo request, id 1064, seq 300, length 64

Note that all the KVM VMs and the host itself are reachable from the CT, but nothing further out, such as the router that is directly connected to the Internet.

I really have no idea whether this is a Proxmox setup issue or something to do with OpenVZ's networking model...
 
