Network questions

mackuz
Jun 26, 2016
Sorry for the silly questions; I'm new to networking.

Here is my /etc/network/interfaces from the Proxmox host:
Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address  192.168.10.100
        netmask  255.255.255.0
        gateway  192.168.10.1
        network  192.168.10.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address  192.168.10.101
        netmask  255.255.255.0
        gateway  192.168.10.1
        network  192.168.10.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
And here are my network settings on the terminal server (LXC, Ubuntu 16.04, XRDP):
Code:
auto eth0
iface eth0 inet static
        address 192.168.10.110
        netmask 255.255.255.0
        gateway 192.168.10.1

auto eth1
iface eth1 inet static
        address 192.168.10.111
        netmask 255.255.255.0
        gateway 192.168.10.1

auto eth2
iface eth2 inet static
Here eth0 is connected to vmbr0, eth1 to vmbr1, and eth2 to vmbr2.
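If it helps, this is roughly how the interfaces are attached in the container's Proxmox config (/etc/pve/lxc/110.conf, where 110 is my container's VMID; I may be misreading the format, and I've left out the hwaddr lines). The IP addresses themselves are set inside the container's /etc/network/interfaces shown above:
Code:
net0: name=eth0,bridge=vmbr0,type=veth
net1: name=eth1,bridge=vmbr1,type=veth
net2: name=eth2,bridge=vmbr2,type=veth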

The goal is to use eth0 for administrative access, eth1 for RDP connections, and eth2 to connect this container to another container that hosts the database, for better performance.

Unfortunately, I don't know how to set up eth2 so that the containers can see each other and use this connection by default for database traffic.
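My best guess so far (untested) is to give vmbr2 its own private subnet; the 192.168.20.0/24 below is just a made-up example:
Code:
# Host: host-only bridge for container-to-container traffic
auto vmbr2
iface vmbr2 inet static
        address  192.168.20.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

# Container: no gateway line here, so the default route stays on eth0
auto eth2
iface eth2 inet static
        address 192.168.20.110
        netmask 255.255.255.0
Is that the right direction, with only one default gateway per container?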
 
Thank you for the quick answer!

Here are my new settings.

Host:
Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address  192.168.10.100
        netmask  255.255.255.0
        gateway  192.168.10.1
        network  192.168.10.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto eth1
iface eth1 inet static
        address  192.168.10.101
        netmask  255.255.255.0
        gateway  192.168.10.1
        network  192.168.10.0

auto vmbr1
iface vmbr1 inet static
        address 192.168.100.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o eth1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '192.168.100.0/24' -o eth1 -j MASQUERADE
Here vmbr0 is bridged to the physical eth0, while vmbr1 is a host-only bridge (no physical ports) masqueraded out through eth1.
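By the way, if I understand correctly, the post-up echo could also be replaced with a persistent setting in /etc/sysctl.conf (applied with sysctl -p):
Code:
# /etc/sysctl.conf
net.ipv4.ip_forward=1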

LXC container:
Code:
auto eth0
iface eth0 inet static
        address 192.168.10.110
        netmask 255.255.255.0
        gateway 192.168.10.1
        network 192.168.10.0

auto eth1
iface eth1 inet static
        address 192.168.100.110
        netmask 255.255.255.0
        gateway 192.168.100.1
        network 192.168.100.0
Here eth0 is connected to vmbr0 and eth1 to vmbr1.

Questions:
  1. Will this work? :)
  2. I can connect via SSH and RDP to 192.168.10.110, but I can't see or ping the masqueraded 192.168.100.0 network from my computer or from the host. Will the LXC containers see each other?
  3. Is this what I was looking for: a direct connection between containers over the 192.168.100.0 network, with better performance than going through the physical network cards?
  4. How can I SSH to the other containers if they only use the 192.168.100.0 network?
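To convince myself about question 2, I sketched this tiny check (a hypothetical helper that just compares the first three octets, which is enough for /24 networks like mine):

```shell
#!/bin/sh
# same_slash24: true when two IPv4 addresses fall in the same /24 network.
# (Illustration only; it simply compares everything before the last dot.)
same_slash24() {
    [ "${1%.*}" = "${2%.*}" ]
}

# My PC vs. the container's LAN-facing address: same /24, so pings work.
same_slash24 192.168.10.50 192.168.10.110 && echo "reachable without a route"

# My PC vs. the container's NATed address: different /24, so a route
# (or the MASQUERADE on the host) is needed in between.
same_slash24 192.168.10.50 192.168.100.110 || echo "needs routing/NAT"
```

For question 4, maybe I can hop through the container that sits on both networks, e.g. `ssh -o ProxyCommand="ssh -W %h:%p root@192.168.10.110" root@192.168.100.111` (where .111 is just an assumed address of another container on the NATed bridge)?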
 
