Multi VM Multi VLAN Set Up

danpez

New Member
Jul 22, 2015
Hi All,

I'm new to Proxmox. I've spent the last few days on Google and this forum trying to figure out how to get this working, but no joy, so apologies for probably bringing up something that's already covered; I'm clearly missing something.

I have a single machine with one LAN port running Proxmox v3.4. It's connected to a Cisco small business switch, and using the default bridge I can access all hosts, as they each have an IP in the default VLAN. The switching side I'm comfortable with; so far I can't get the hosts to pick up the extra interfaces at all.

One host will only have 1 interface in this default VLAN so that's all good.

The other two, I want to have 2 VLANs, so they each have an interface in the default VLAN and one other.

VLAN1 gateway is 10.1.0.1 - Internet gateway

Host 1 ------- VLAN1 only - 10.1.0.2
Host 2 ------- VLAN1 - 10.1.0.3
       ------- VLAN2 - 10.2.0.1
Host 3 ------- VLAN1 - 10.1.0.4
       ------- VLAN3 - 10.3.0.1

Host 2 and Host 3 will be the gateways for VLANs 2 and 3 respectively. They will answer on those VLANs but route out via VLAN1 to the Internet gateway.

Is this possible? I'm sure it is, but I can't seem to get this working on even one VLAN yet.

I've tried adding them all on the pve host so I can select them for each instance, but after configuring and rebooting, stuff is missing or not recognised. I must be doing something wrong.

Incidentally, one host is a container, one was installed from an ISO, and the other is a VMware image converted to qcow2, which works fine (managed to convert that OK and get it working!). Not sure if that matters.

Appreciate any help!
Cheers
 
Unless I am missing something, this seems too simple not to work. I am not sure what you meant by "stuff is missing" and what's not recognised.
Did you create separate bridges for those VLANs? Post the /etc/network/interfaces for all 3 hosts if you can, and let's see if we can help you get it going.
 
Yes, apologies for the noob question. I'm sure I'm doing something very simple, very wrong!

I was assuming this configuration should be on the pve host? Then each bridge can be selected by each VM or container as a network device. One of the VMs is a MikroTik ISO, so it is not a Linux host; the other is a Debian BIND9 build. So I can choose the network device in the Proxmox web GUI and it will start up as required.

What was also confusing me somewhat is the changes described in the documentation between versions, e.g. whether or not a bridge is needed for each VLAN, or whether they can be created using vconfig without bridges.
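For example, the older docs create the sub-interface with vconfig, while newer ones use ip link (a sketch assuming eth0 and VLAN ID 2; both end up with the same eth0.2 device):

```shell
# Older docs (vlan package): load 8021q and use vconfig
# modprobe 8021q
# vconfig add eth0 2          # creates eth0.2

# Newer docs: iproute2 does the same thing without vconfig
ip link add link eth0 name eth0.2 type vlan id 2
ip link set eth0.2 up
```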

Here are the pve interfaces file and ifconfig output at this moment. I disabled all the bridges because, while they came up in ifconfig on the pve host, interfaces were appearing and disappearing in the Proxmox web GUI. The IP addresses are different, but it's the same logic. I had auto lines for each bridge after the loopback, but removed them.

-------------------------------------------------
root@pve:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

#iface eth0 inet static
# address 10.0.10.31
# netmask 255.255.255.0
# gateway 10.0.10.1

#iface eth0.1 inet manual
# vlan-raw-device eth0

#iface eth0.2 inet manual
# vlan-raw-device eth0

#iface eth0.3 inet manual
# vlan-raw-device eth0

auto vmbr0
iface vmbr0 inet static
address 10.0.10.31
netmask 255.255.255.0
gateway 10.0.10.1
bridge_ports eth0
bridge_stp off
bridge_fd 0

#iface vmrag1 inet static
# address 10.0.10.98
# netmask 255.255.255.0
# gateway 10.0.10.1
# bridge_ports eth0.1
# bridge_stp off
# bridge_fd 0

#iface vmmt1 inet static
# address 10.0.10.100
# netmask 255.255.255.0
# gateway 10.0.10.1
# bridge_ports eth0.1
# bridge_stp off
# bridge_fd 0

#iface vmdns1 inet static
# address 10.0.10.99
# netmask 255.255.255.0
# gateway 10.0.10.1
# bridge_ports eth0.1
# bridge_stp off
# bridge_fd 0

#iface vmmt2 inet static
# address 10.2.10.100
# netmask 255.255.255.0
# bridge_ports eth0.2
# bridge_stp off
# bridge_fd 0

#iface vmdns3 inet static
# address 172.16.25.99
# netmask 255.255.255.0
# bridge_ports eth0.3
# bridge_stp off
# bridge_fd 0

-----------------------------------

root@pve:~# ifconfig
eth0 Link encap:Ethernet HWaddr 00:e0:4c:68:43:04
inet6 addr: fe80::2e0:4cff:fe68:4304/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1287132 errors:0 dropped:0 overruns:0 frame:0
TX packets:817579 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:565939231 (539.7 MiB) TX bytes:613228011 (584.8 MiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:714164 errors:0 dropped:0 overruns:0 frame:0
TX packets:714164 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:548593961 (523.1 MiB) TX bytes:548593961 (523.1 MiB)

venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet6 addr: fe80::1/128 Scope:Link
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

vmbr0 Link encap:Ethernet HWaddr 00:e0:4c:68:43:04
inet addr:10.0.10.31 Bcast:10.0.10.255 Mask:255.255.255.0
inet6 addr: fe80::2e0:4cff:fe68:4304/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1251170 errors:0 dropped:0 overruns:0 frame:0
TX packets:813866 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:543764293 (518.5 MiB) TX bytes:612983041 (584.5 MiB)

root@pve:~#


------------------------------
 
On mine, I usually do something similar to:

Code:
# network interface settings
auto lo
iface lo inet loopback


iface eth0 inet manual


iface eth1 inet manual


auto vmbr0
iface vmbr0 inet static
        address  192.168.6.10
        netmask  255.255.255.0
        gateway  192.168.6.1
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0


auto vmbr15
iface vmbr15 inet manual
        bridge_ports eth0.15
        bridge_stp off
        bridge_fd 0


auto vmbr11
iface vmbr11 inet manual
        bridge_ports eth0.11
        bridge_stp off
        bridge_fd 0

I number the vmbrs based on the tagged VLAN they will communicate with. In this example, eth1 is connected to the Proxmox management network and is not used for VM guests at all. The two VLANs that do have guests connected are not assigned an IP, as no one on those VLANs should be allowed to communicate with the host in any way.
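You can sanity-check that the kernel actually built the bridges with brctl (output shape only, from memory; the bridge ids will differ):

```shell
# After ifup, each bridge should list its VLAN sub-interface as a port:
brctl show
# bridge name   bridge id          STP enabled   interfaces
# vmbr0         8000.xxxxxxxxxxxx  no            eth1
# vmbr11        8000.xxxxxxxxxxxx  no            eth0.11
# vmbr15        8000.xxxxxxxxxxxx  no            eth0.15
```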
 
Hmm. So I've tried that, but I can't seem to select the bridges in the GUI. I configured as below, restarted networking and rebooted the pve host.

Code:
# network interface settings
auto lo
iface lo inet loopback


iface eth0 inet manual


#iface eth0.1 inet manual
#    vlan-raw-device eth0


#iface eth0.2 inet manual
#    vlan-raw-device eth0


#iface eth0.3 inet manual
#    vlan-raw-device eth0


auto vmbr0
iface vmbr0 inet static
    address  10.0.10.31
    netmask  255.255.255.0
    gateway  10.0.10.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0


auto vmrag1
iface vmrag1 inet manual
    bridge_ports eth0.1
    bridge_stp off
    bridge_fd 0


auto vmmt1
iface vmmt1 inet manual
    bridge_ports eth0.1
    bridge_stp off
    bridge_fd 0


auto vmdns1
iface vmdns1 inet manual
    bridge_ports eth0.1
    bridge_stp off
    bridge_fd 0


auto vmmt2
iface vmmt2 inet manual
    bridge_ports eth0.2
    bridge_stp off
    bridge_fd 0


auto vmdns3
iface vmdns3 inet static
    bridge_ports eth0.3
    bridge_stp off
    bridge_fd 0




root@pve:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:e0:4c:68:43:04  
          inet6 addr: fe80::2e0:4cff:fe68:4304/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4747 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3657 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1951001 (1.8 MiB)  TX bytes:2153291 (2.0 MiB)


eth0.1    Link encap:Ethernet  HWaddr 00:e0:4c:68:43:04  
          inet6 addr: fe80::2e0:4cff:fe68:4304/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:1606 (1.5 KiB)


eth0.2    Link encap:Ethernet  HWaddr 00:e0:4c:68:43:04  
          inet6 addr: fe80::2e0:4cff:fe68:4304/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:1606 (1.5 KiB)


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:3202 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3202 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2208582 (2.1 MiB)  TX bytes:2208582 (2.1 MiB)


tap100i0  Link encap:Ethernet  HWaddr ea:84:ee:6a:63:18  
          inet6 addr: fe80::e884:eeff:fe6a:6318/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:599 errors:0 dropped:0 overruns:0 frame:0
          TX packets:977 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:60285 (58.8 KiB)  TX bytes:143442 (140.0 KiB)


tap102i0  Link encap:Ethernet  HWaddr 12:72:39:98:9e:b9  
          inet6 addr: fe80::1072:39ff:fe98:9eb9/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:47 errors:0 dropped:0 overruns:0 frame:0
          TX packets:940 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:3893 (3.8 KiB)  TX bytes:83766 (81.8 KiB)


venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:112 errors:0 dropped:0 overruns:0 frame:0
          TX packets:112 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:21874 (21.3 KiB)  TX bytes:13189 (12.8 KiB)


vmbr0     Link encap:Ethernet  HWaddr 00:e0:4c:68:43:04  
          inet addr:10.0.10.31  Bcast:10.0.10.255  Mask:255.255.255.0
          inet6 addr: fe80::2e0:4cff:fe68:4304/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4768 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3030 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1832169 (1.7 MiB)  TX bytes:2106142 (2.0 MiB)


vmdns1    Link encap:Ethernet  HWaddr d2:03:12:73:71:1f  
          inet6 addr: fe80::d003:12ff:fe73:711f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:578 (578.0 B)


vmmt1     Link encap:Ethernet  HWaddr 8a:1b:1a:c7:5c:90  
          inet6 addr: fe80::881b:1aff:fec7:5c90/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:578 (578.0 B)


vmmt2     Link encap:Ethernet  HWaddr 00:e0:4c:68:43:04  
          inet6 addr: fe80::2e0:4cff:fe68:4304/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:578 (578.0 B)


vmrag1    Link encap:Ethernet  HWaddr 00:e0:4c:68:43:04  
          inet6 addr: fe80::2e0:4cff:fe68:4304/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:578 (578.0 B)


root@pve:~#

Here is a screenshot of the GUI:

screensh0t.png
 
A. Only put an IP address on vmbr0 (the one you use to connect to Proxmox).

B. You do not need IP addresses on the other bridges.

C. Make a vmbr that isn't connected to an eth device and you will have a VM-only NIC.

From what you have described that you want, you don't need anything more than 2 bridges: one with eth0 on it, and another for internal VM communication only. You can just put a VLAN ID directly on the NIC you assign to the VM. You have made this way more complicated than it is.

EDIT:
Now that I am back on my PC, this is what I would do in your situation:

Code:
auto vmbr0
iface vmbr0 inet static
    address 10.0.10.31
    netmask 255.255.255.0
    gateway 10.0.10.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

Then add another VMBR (through the GUI or /etc/network/interfaces.new).
Do not give that VMBR an IP address, and do not attach it to any device.
When creating your VM or container, you can select vmbr0 and add a VLAN tag to it if that VM/container needs access outside of the Proxmox node. If it just needs internal-only access, give it the other vmbr you created, and optionally tag it with a VLAN ID as well. Your internal-only vmbr can be used just like a physical switch for all your VMs. You can set up a router distro with vmbr0 for WAN and your other vmbr for LAN, and then everything else in the Proxmox node can use that LAN vmbr coming from your router VM as its sole NIC. Your router VM will then handle all the routing and firewalling.
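On the CLI, the tag field from the GUI corresponds to something like this (the VMIDs here are made up; adjust to your own):

```shell
# Second NIC for the VLAN2 VM, bridged on vmbr0 and tagged:
qm set 101 -net1 virtio,bridge=vmbr0,tag=2

# Second NIC for the VLAN3 VM:
qm set 102 -net1 virtio,bridge=vmbr0,tag=3
```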
 
I need the VMs on the host to communicate with equipment off the host though, it's not just internal comms. This is why I was trying to assign the VLANs as sub-interfaces on the physical interface?
 
But you can just pass the VLAN tag right on your vmbr0. You do not need to create sub-interfaces; when you use the VLAN tag, it actually mimics that sub-interface.

Can you provide a network diagram? You have a router that handles the VLAN traffic, yes? You simply tag the VMBR with the VLAN ID you want, and it will put the VM on that VLAN.
 
Try the following configuration for Host 2 and 3 for VLAN2 and 3:
Host #2
======
auto vlan2
iface vlan2 inet manual

auto vmbr2
iface vmbr2 inet static
address 10.2.0.1
netmask 255.255.255.0
bridge_ports vlan2
bridge_stp off
bridge_fd 0

Host #3
======
auto vlan3
iface vlan3 inet manual

auto vmbr3
iface vmbr3 inet static
address 10.3.0.1
netmask 255.255.255.0
bridge_ports vlan3
bridge_stp off
bridge_fd 0

After you have configured /etc/network/interfaces, run the commands to start both the vlan and vmbr interfaces:
#ifup vlan2
#ifup vmbr2

Or you can just reboot both nodes.

Host 2 and 3 will be the gateway for VLANs 2 and 3. They will reply on those but route out VLAN1 to the Internet gateway.
From your message it sounds like you want both VLAN 2 and 3 to go out through VLAN 1 for internet? You can use static routing, or create VLANs in your firewall for tags 2 and 3 so that each VLAN has its own gateway. Or create a virtual firewall with 4 virtual NICs: 1 vNIC internet-facing and the other 3 for VLANs 1, 2 and 3.
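If you go the static-routing route, the gateway VM for VLAN2 would need something along these lines (a sketch using the addresses from the first post; exact commands depend on the distro):

```shell
# On the VLAN2 gateway VM (10.1.0.3 on VLAN1, 10.2.0.1 on VLAN2):
sysctl -w net.ipv4.ip_forward=1      # enable routing (persist in /etc/sysctl.conf)
ip route add default via 10.1.0.1    # send everything else to the internet gateway

# On the internet gateway (10.1.0.1), a return route so replies
# destined for VLAN2 clients come back through this VM:
ip route add 10.2.0.0/24 via 10.1.0.3
```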

Incidentally, 1 host is a container, one is an ISO and the other is a converted VMware to qcow2 which works fine (managed to convert that OK and get it working!). Not sure if that matters.
Not sure what you meant by "host". Did you mean VM?
 
Special vlan interfaces are no longer required in Proxmox. Every KVM and VZ guest can get a VLAN tag on it.

This configuration appears to be overly complicated for what the OP has requested.

It's as simple as having one interface with vmbr0 and tagging whatever VLANs need to be tagged. If a VM needs a second NIC attached, you just attach it to vmbr0 and tag the VLAN.
 
Yep, agree with you. But this configuration has worked flawlessly for us over the years in our datacenter.
We also use Open vSwitch, which also takes away the complexity of multiple VLAN interfaces. It's much easier to manage a complex virtual environment that way.
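For reference, our OVS setup looks roughly like this in /etc/network/interfaces (a sketch only; the names and addresses are illustrative, and it needs the openvswitch-switch package installed):

```text
auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eth0 mgmt

allow-vmbr0 eth0
iface eth0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

# The management IP lives on an internal port, not the bridge itself
allow-vmbr0 mgmt
iface mgmt inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    address 10.0.10.31
    netmask 255.255.255.0
    gateway 10.0.10.1
```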
 
I used the special vlan interfaces prior to the addition of the tag capability on the VZ containers. It's an unnecessary complexity now; much easier to just tag the VLAN. If I want to add a VLAN to my network, I can do it now without changing any network configuration on Proxmox.
 
Sorry for the late reply. Yes, I meant VM instead of host before; apologies for the confusion!

Here is a network diagram:

VM.png

So the WLAN on the right is in VLAN2, which has the MikroTik router VM as its gateway.

The LAN on VLAN3 at the bottom has the Debian VZ container, a DNS proxy, as its gateway.

The routing/LAN components I have no issue with; it's getting these VLANs/interfaces to work over the single NIC from the Proxmox host to the LAN switch that I'm struggling with...
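On the switch side, the Proxmox-facing port is trunked roughly like this (from memory; exact syntax varies between Cisco Small Business models, and many of them are configured through the web UI instead):

```text
! Proxmox-facing port: VLAN1 untagged (native), VLANs 2-3 tagged
interface gi1
 switchport mode trunk
 switchport trunk allowed vlan add 2-3
```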
 

There should be no struggle at all. Use your single vmbr0, and in your VM settings, under the network interface setting, just tag the interfaces onto whatever VLAN you want them. It is really straightforward, and there is no need for weird or special rules in your Proxmox interfaces config.
 
