One bridge, multiple public subnets for containers

Mihai Pencea

Jun 25, 2016
Hello,

At the moment I use Proxmox 3.4, where I have several OpenVZ CTs with IPs from 3 different public subnets.
I am now trying to migrate them to a new server running Proxmox 4.4. A CT I migrated has an IP from the 89.39.x.x subnet (the Proxmox host's vmbr0 has 195.178.x.x) and has no internet access. What is the solution for this case?

I also migrated a CT with an IP from 195.178.x.x, which works perfectly.

On Proxmox 3.4 I have neighbour devices enabled in /etc/vz/vz.conf (NEIGHBOUR_DEVS) to make them work.

Any suggestion/solution would be much appreciated.
 
At the moment I use Proxmox 3.4, where I have several OpenVZ CTs with IPs from 3 different public subnets.

How is your host connected to these subnets? With 3 physical networks? Or just one (as it happens in some cases when you have equipment from a hoster)?

I am now trying to migrate them to a new server running Proxmox 4.4. A CT I migrated has an IP from the 89.39.x.x subnet (the Proxmox host's vmbr0 has 195.178.x.x) and has no internet access. What is the solution for this case?

Bridge it to the physical NIC on the host that the IP address needs to be reachable through. The configuration depends on the provider's (hoster's) network architecture.
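On Proxmox 4.x, a migrated OpenVZ CT becomes an LXC container whose NIC is defined in /etc/pve/lxc/&lt;vmid&gt;.conf. A minimal sketch of such a net0 line, assuming VMID 104 and using the thread's masked addresses as placeholders:

```
# /etc/pve/lxc/104.conf -- example NIC entry (addresses are placeholders)
net0: name=eth0,bridge=vmbr0,ip=89.39.x.x/24,gw=89.39.x.x
```

The gateway here must be the one belonging to the CT's own subnet, not the host's.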

It may help if you post your old /etc/network/interfaces from both the host and your CTs, as well as the OpenVZ configuration files.
 
1 physical network, own network, one LAN with all subnets; every subnet has its own gateway, everything is directly connected.

# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 89.39.x.x
        netmask 255.255.255.0
        gateway 89.39.x.x
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

OpenVZ CT conf file (/etc/vz/conf/104.conf):

ONBOOT="yes"

PHYSPAGES="0:1024M"
SWAPPAGES="0:1024M"
KMEMSIZE="465M:512M"
DCACHESIZE="232M:256M"
LOCKEDPAGES="512M"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"

# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="10G:11G"
DISKINODES="2000000:2200000"
QUOTATIME="0"
QUOTAUGIDLIMIT="0"

# CPU fair scheduler parameter
CPUUNITS="1000"
CPUS="1"

HOSTNAME="server1.leadingedgecontact.com"
NAMESERVER="8.8.8.8 8.8.4.4"
IP_ADDRESS="195.178.x.x 195.178.x.x 195.178.x.x 195.178.x.x 195.178.x.x"
VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/var/lib/vz/private/104"
OSTEMPLATE="centos-6-standard_6.3-1_amd64.tar.gz"
CAPABILITY="NET_ADMIN:on"

Global OpenVZ conf (/etc/vz/vz.conf):

## Global parameters
VIRTUOZZO=yes
LOCKDIR=/var/lib/vz/lock
DUMPDIR=/var/lib/vz/dump
VE0CPUUNITS=1000

## Logging parameters
LOGGING=yes
LOGFILE=/var/log/vzctl.log
LOG_LEVEL=0
VERBOSE=0

## Disk quota parameters
DISK_QUOTA=yes
VZFASTBOOT=no

# Disable module loading. If set, vz initscript does not load any modules.
#MODULES_DISABLED=yes

# The name of the device whose IP address will be used as source IP for CT.
# By default automatically assigned.
#VE_ROUTE_SRC_DEV="eth0"

# Controls which interfaces to send ARP requests and modify ARP tables on.
NEIGHBOUR_DEVS=all

## Fail if there is another machine in the network with the same IP
ERROR_ON_ARPFAIL="no"

## Template parameters
TEMPLATE=/var/lib/vz/template

## Defaults for containers
VE_ROOT=/var/lib/vz/root/$VEID
VE_PRIVATE=/var/lib/vz/private/$VEID

## Filesystem layout for new CTs: either simfs (default) or ploop
#VE_LAYOUT=ploop

## Load vzwdog module
VZWDOG="no"

## IPv4 iptables kernel modules to be enabled in CTs by default
IPTABLES="ipt_REJECT ipt_recent ipt_owner ipt_REDIRECT ipt_tos ipt_TOS ipt_LOG ip_conntrack ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state iptable_nat ip_nat_ftp"

## IPv4 iptables kernel modules to be loaded by init.d/vz script
IPTABLES_MODULES="$IPTABLES"

## Enable IPv6
IPV6="yes"

## IPv6 ip6tables kernel modules
IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"
 
Bridge all virtual container NICs to vmbr0 and set gateway and netmask accordingly (via WebGUI or inside the containers).
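For an LXC CT this can also be done from the host's shell with pct instead of the web GUI; a sketch, assuming VMID 104 and treating the masked addresses as placeholders to be replaced:

```
# attach a NIC bridged to vmbr0, with the CT's own subnet gateway
pct set 104 -net0 name=eth0,bridge=vmbr0,ip=195.178.x.x/24,gw=195.178.x.x
```

Since all subnets sit on the same LAN behind eth0, a single bridge (vmbr0) is enough; only the per-CT IP and gateway differ.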
 
That doesn't work; I already did that. If you want, I can set up a test server and give you access to play with.

If you send a packet from outside to one of the 195.178.x.x addresses (the first is usually an ARP request) - does it arrive at the host's physical NIC (eth0)? Check via tcpdump.

And if not: where should it arrive then? If not sure, ask your provider!
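The check described above can be done on the host with tcpdump; a sketch, assuming eth0 is the bridged NIC and substituting a real address for the masked placeholder:

```
# watch for ARP and IP traffic for the container's address on the host NIC
tcpdump -n -i eth0 'arp or host 195.178.x.x'
```

If nothing shows up here while you ping the CT from outside, the problem is upstream of the host, not in the bridge.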
 
