Physical Host with 2 NICs Each with Different Gateways

Discussion in 'Proxmox VE 1.x: Installation and configuration' started by EastonRoyce, Jul 11, 2009.

  1. EastonRoyce

    EastonRoyce New Member

    Joined:
    Jul 11, 2009
    Messages:
    5
    Likes Received:
    0
    Hi Team,

    I just started using Proxmox, and I am very impressed with it! I do have a small dilemma that perhaps someone on this helpful forum can assist me with.

    My server has two physical NICs in it. Here at the office, we have two different gateways to two different ISPs. All internal IP addresses are statically assigned. Some PCs, devices, whatever, access the internet through Gateway 1 (to ISP 1), and some access the internet through Gateway 2 (to ISP 2). All are on the same network/subnet.

    In the Proxmox web interface, I configured two bridges (vmbr0 and vmbr1), one bridge for each of the interfaces (eth0 and eth1), however it seems to only let me configure one gateway (vmbr0 for example) address for one of the bridges.

    Sorry this is so long winded, but I guess my question is: can I configure the second bridge (vmbr1) with the second gateway, and thus have different guest OSes use a specific gateway, rather than being stuck with only the one?

    Or perhaps I have it all backwards?

    I am familiar with Linux and the command line, but I haven't attempted something like this before (we are switching from VMware server).

    I am going to have an attempt myself now. I just thought I would put the questions out there for anyone who has done this before, and for anyone else in the same situation as myself.

    Thanks!
     
  2. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,340
    Likes Received:
    286
    Guest OSes can use whatever gateway they want. This does not depend on the gateway used by the host.

    Short: You configure the gateway inside the guest.
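    For example, in a Debian-based guest this just means editing /etc/network/interfaces inside the guest (the addresses below are placeholders, not anything specific to your setup):

    ```shell
    # /etc/network/interfaces inside the guest -- example addresses only
    auto eth0
    iface eth0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.221   # point this guest at whichever gateway it should use
    ```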

    - Dietmar
     
  3. EastonRoyce

    EastonRoyce New Member

    Joined:
    Jul 11, 2009
    Messages:
    5
    Likes Received:
    0
    Hi Dietmar,

    Thanks for your very speedy reply. I've had no success thus far.

    Rather than beat around the bush, I better come straight out and ask the obvious questions.

    What is the best way for me to configure the host NICs, so that any guest OS can have network settings that utilize a gateway not configured on the host?

    I've tried configuring each NIC with a separate bridge, then creating a container attached to a bridge (vmbr0 for example) and configuring an IP address inside the guest OS, with no success at all.

    I must be missing something. Your assistance is most appreciated!
     
  4. EastonRoyce

    EastonRoyce New Member

    Joined:
    Jul 11, 2009
    Messages:
    5
    Likes Received:
    0
    Me Again!

    I got it working. I ended up following one of the video tutorials and bonding the two NICs together. Not that the bond itself matters, so long as a vmbr0 exists and it contains network devices, be they a bondX or an ethX.

    When you create a container, instead of choosing Virtual Network (venet), select Bridged Ethernet (veth) instead!

    In my container, I had to create and configure ifcfg-eth0 and ifcfg-lo, and set the ifcfg-venet0 and ifcfg-venet0:0 interfaces to not activate on boot (ONBOOT=no).
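    For anyone curious, the files ended up looking roughly like this (my addresses are examples; adjust to suit your network):

    ```shell
    # /etc/sysconfig/network-scripts/ifcfg-eth0 (inside the container)
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.1.54        # example address
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.221      # example: the second ISP's gateway

    # /etc/sysconfig/network-scripts/ifcfg-lo (inside the container)
    DEVICE=lo
    IPADDR=127.0.0.1
    NETMASK=255.0.0.0
    ONBOOT=yes

    # and in ifcfg-venet0 and ifcfg-venet0:0, change ONBOOT=yes to ONBOOT=no
    ```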

    After that, everything worked as it should! Yay and hooray for personal persistence!
     
    #4 EastonRoyce, Jul 11, 2009
    Last edited: Jul 11, 2009
  5. hisaltesse

    hisaltesse Member

    Joined:
    Mar 4, 2009
    Messages:
    214
    Likes Received:
    0
    While you got this to work, keep in mind that venet is more secure against traffic sniffing than veth.

    So when you set up containers it is recommended to use venet.

    However, if you do not want to set up your networking inside the container, you can create virtual network devices such as vmbr0:0 and vmbr0:1, to which you can assign other IP ranges on different networks on the host server.

    This means that you would have to assign one of the IPs in each range to the server. Once you do that, you can assign the container IPs from the Proxmox web interface under Network, and never have to touch the container's network interface from inside.

    I hope this helps.

    Regards.


     
  6. EastonRoyce

    EastonRoyce New Member

    Joined:
    Jul 11, 2009
    Messages:
    5
    Likes Received:
    0
    Hi hisaltesse,

    Thanks for the update. You are right, and security is important for me.

    I don't mean to ask you to go to a lot of effort for me, however, how should I create the virtual network devices vmbr0:0 etc?

    Can I create them in the web interface, or do I need to use the host's command line?

    Thanks!
     
  7. hisaltesse

    hisaltesse Member

    Joined:
    Mar 4, 2009
    Messages:
    214
    Likes Received:
    0
    Not a problem. You would edit the file /etc/network/interfaces
    and append the following at the end of your file:

    auto vmbr0:0
    iface vmbr0:0 inet static
    address xxx.xxx.xxx.xxx
    netmask 255.255.255.yyy



    Where xxx.xxx.xxx.xxx is one of the addresses in your range, and 255.255.255.yyy is your netmask.

    (Sometimes you may also need a gateway line, but try it without one first.)

    Then you restart your network and you are done.

    This is like assigning one of your IPs from an IP range to your host server. Once this is done you can assign the remaining IPs of the range to each VPS through the Proxmox web interface.

    Hope this helps.
    Regards.
     
  8. EastonRoyce

    EastonRoyce New Member

    Joined:
    Jul 11, 2009
    Messages:
    5
    Likes Received:
    0
    Hi again,

    Thanks hisaltesse. I managed to figure out how to change the network settings on the host (I'm new to Debian) not long after my last post. Wow, configuring network interfaces in Debian is so much simpler than in CentOS/Red Hat! Maybe I've been using the wrong Linux distro all this time?! :p

    After playing about and changing things in the office, I came to two solutions to my problem. My conundrum isn't all that clear in the first post, so I'll give as much detail as possible for anyone else in a similar situation.

    N.B. I'm not saying these are the only solutions, these are simply what worked best for me.

    A server (a Dell SC440) with two network interfaces. We have two ISP connections (DSL broadband). We have one internal network range (192.168.1.0) with two gateways (192.168.1.254 and 192.168.1.221).

    I have installed Proxmox on the server. I wanted to be able to choose which gateway the guest containers used, instead of them being locked into using the same gateway as the host. Some of the guest containers will host services via the first ISP, and the others will host services via the second ISP (obviously with firewall rules etc. configured in our gateways). So the container gateways need to point to the respective ISP/gateway.

    Note: While the security advantage of venet over veth is certainly worthwhile in a hosting environment (and others as well), in my case we have two hardware firewalls that haven't let us down yet, and there is little to no concern about the possibility of an internal hacker (there are only three of us!).

    Solution 1. Requires the most effort, but gives the most flexibility.

    I decided to configure the host with a vmbr0 that contains a bond0, which in turn contains eth0 and eth1 (see the video tutorials on pve.proxmox.com if you're unsure of how to create a bond0). I configured the bond0 with an IP address that matches our office network (this address doesn't matter as far as the containers are concerned; it just makes the host easy to access if it's on the same network).

    Creating the network bond is optional (except for redundancy reasons); so long as you end up with an accessible vmbrX device that contains either an ethX or a bondX, you're good to go. Bonding network devices in Linux is close to using the 'Bridge Connections' feature available in Windows XP and later when you have two similar Ethernet network devices.

    When creating a new container in the web interface, instead of choosing Virtual Network (venet) choose Bridged Ethernet (veth) instead.

    hisaltesse pointed out in a previous post that veth is not as secure as venet (which is true); however, veth does give your guest container OS direct access to the network, in a similar fashion to the way VMware Server can give a guest OS direct access to a physical network using bridged Ethernet.

    Once your guest OS container is up and running, you'll need to use the "Open VNC Console" link on the container's General page in the Proxmox web interface to get console access to the container, so that you can manually configure your network interfaces. Instead of configuring venet0:0 or similar devices, you'll be back to configuring eth0 or similar. In CentOS, these turn out to be ifcfg-eth0 and ifcfg-lo. If you are unsure of how to do this manually, I suggest googling for answers, or in CentOS you can use a tool (if it's available in your guest OS template) called system-config-network-tui. It's available via yum if you don't have it.

    Easy way to get it
    1. Create a container with CentOS
    2. Assign it an IP address using Virtual Network (venet)
    3. Login via the VNC console or SSH and issue: yum install system-config-network-tui
    4. Logout
    5. Use the Proxmox web interface to shutdown the container
    6. Remove the Virtual Network Adapter (venet) by deleting the IP Address and clicking Save.
    7. The page will refresh and you can now select vmbr0 from the Bridged Ethernet Devices section. Click Save.
    8. Start your container
    9. Login via VNC Console and issue: system-config-network-tui
    10. You're on your own from here. Remember, you need to configure eth0, not venet0 and venet0:0 - you may want to remove those. Don't forget to set your gateway and edit your DNS
    11. Restart your network with /etc/init.d/network restart
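    For anyone who prefers the command line, I believe the same veth hookup can also be done from the host with vzctl rather than the web interface (101 here is just an example container ID; untested by me, so double-check before relying on it):

    ```shell
    # on the Proxmox host -- 101 is an example container ID
    vzctl set 101 --netif_add eth0 --save   # add a veth interface to the container
    vzctl set 101 --ipdel all --save        # drop any venet addresses
    # the host-side veth device (veth101.0 by default) then needs to join the bridge:
    brctl addif vmbr0 veth101.0
    ```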


    Solution 2. Requires less effort, doesn't necessarily give as much flexibility, but increases security.

    As hisaltesse has noted above, you can edit your /etc/network/interfaces file on the host (the server with Proxmox installed). It's actually kinda easy. This option also lets you avoid editing the network settings inside your guest container OS. So you can create plenty of containers without having to edit each one! Neat!

    Remember: I have two gateways, 1.254 and 1.221, and my server has two network interfaces.

    I could configure each of the network interfaces with two different networks. For example: NIC1: 192.168.1.0 with a gateway of 192.168.1.254 and NIC2: 192.168.4.0 with a gateway of 192.168.4.254.

    There is no point in configuring both interfaces with the same network, for example NIC1: 192.168.1.200 with a gateway of 192.168.1.254 and NIC2: 192.168.1.205 with a gateway of 192.168.1.221. The host (and consequently any guest container OS) will always use the gateway configured on eth0. If there is a way around this, I wasn't able to figure it out.
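    One possibility I haven't tested: iproute2 source-based policy routing, where a second routing table picks the gateway based on the source address. Something along these lines (my addresses as examples; the table name "isp2" is arbitrary):

    ```shell
    # UNTESTED sketch: send traffic sourced from 192.168.1.205 via the second gateway
    echo "200 isp2" >> /etc/iproute2/rt_tables            # declare a second routing table
    ip route add default via 192.168.1.221 dev vmbr0 table isp2
    ip rule add from 192.168.1.205 table isp2             # match on source address
    ip route flush cache
    ```

    If that works, it would let both gateways live on the one 192.168.1.0 network. If anyone has tried it, please chime in.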

    By configuring each NIC with a separate network (192.168.1.0 and 192.168.4.0), I can make a guest container OS use the gateway of the 4.0 network, by giving it an IP Address like: 192.168.4.54, or make a guest container OS use the gateway of the 1.0 network, by giving it an IP Address like: 192.168.1.54.

    However, we're not done yet. Here's how you do it. In the Web Interface, I again configured my network as follows. This could be done in a variety of ways, so long as you end up with a vmbr0.

    I created a bond0, that contained eth0 and eth1. I then created a vmbr0 that contained bond0. I set the IP address of vmbr0 to 192.168.1.230 - so I could access it again after reboot. I didn't configure IP Addresses anywhere else.

    After rebooting, as hisaltesse has noted above, I edited my /etc/network/interfaces file.

    On the host I issued: nano /etc/network/interfaces (I love nano. I know it's for noobs, but I can't get my head around vi; I've used nano for too long!)

    At the bottom of my /etc/network/interfaces file I added

    auto vmbr0:0
    iface vmbr0:0 inet static
    address 192.168.4.100
    netmask 255.255.255.0
    gateway 192.168.4.254

    auto vmbr0:1
    iface vmbr0:1 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.254


    DNS can be configured individually for each container using the Proxmox web interface, or in my case of using CentOS, by editing the resolv.conf file inside my guest container OS. Do what is easiest for you. You can also use system-config-network-tui.
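    Inside the container, that file is only a couple of lines (the nameservers below are examples; ours are just the two DSL routers):

    ```shell
    # /etc/resolv.conf inside the container -- example nameservers
    nameserver 192.168.1.254
    nameserver 192.168.1.221
    ```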

    Once I had modified my /etc/network/interfaces file, I saved it and then restarted the network service by issuing: /etc/init.d/networking restart

    I next created guest OS containers with IP addresses that matched the network of the gateway that I wanted to use. In this situation, the guest containers can also communicate with each other, even though they may be on different networks, thanks to the host. Handy, no?

    In the end, I am using Solution 1. Mainly because we are not running a hosting company, rather just running services on our network and migrating away from VMware, so configuring network settings manually isn't a big issue in this case. In the Proxmox web interface, I added the IP address of each container into the comments section for future reference.

    If I were to use Solution 2, in our case we would have to change the IP address of not only the gateway, but also of many other devices currently using the 192.168.1.0 network via the second gateway 192.168.1.221. We would inevitably have to implement a routing solution to allow nodes on each network (1.0 and 4.0) to communicate with devices on either network, as is the case with most devices communicating with each other now.

    As I said above, this is not necessarily the best or only solution; it is, however, what I was able to come up with. Comments, suggestions and improvements are welcome.

    If someone can help me get the security of using venet instead of veth (and manually configuring the containers) and be able to get each of my containers to use one gateway or the other, all on the one network, without having to edit the container network settings, well then that would be awesome!

    I would persist further at this point, but now that I have a working solution, I need to get on with the migration!

    So say we all, and good hunting.

    EastonRoyce
     
    #8 EastonRoyce, Jul 12, 2009
    Last edited: Jul 20, 2009