VLANs inside VE

I installed ProxMox and set up the VLAN in the web config... no go. The vlan package is installed, 8021q is listed in lsmod, and the config files are in the correct locations.

This is what I did in straight Debian that worked just fine.

apt-get install vlan    # userspace VLAN tools (vconfig)
modprobe 8021q          # load the VLAN tagging module
vconfig add eth0 2      # create eth0.2 (VLAN 2 on eth0)
ifconfig eth0.2 65.182.165.39 broadcast 65.182.165.255 netmask 255.255.254.0 up
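
For reference, the persistent version of the same thing in /etc/network/interfaces should look something like this (just a sketch using the addresses above; vlan-raw-device comes from the vlan package's ifupdown hooks):
Code:
# VLAN 2 on eth0, same addressing as the commands above
auto eth0.2
iface eth0.2 inet static
    address 65.182.165.39
    netmask 255.255.254.0
    broadcast 65.182.165.255
    vlan-raw-device eth0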

It doesn't work on ProxMox. I even removed it from the GUI, restarted, and tried doing it via the CLI... nothing.
 
No idea. VLAN works for other people.

Is there any paid support available?

Being told "I don't know, I don't feel like looking into it, go jump off a cliff" is getting kinda old.

Surely I can't have encountered 2-3 unexplained, out-there problems that no one else has had.

Your site claims free enterprise-class support. I'm kinda wondering what enterprise provides this kind of support.

Free enterprise class support

Unlimited support incidents via E-mail based trouble ticket system and forum

Direct system support via the Internet (SSH) for the resolution of specific issues and system optimization
 
Is there any paid support available?

Being told "I don't know, I don't feel like looking into it, go jump off a cliff" is getting kinda old.

Surely I can't have encountered 2-3 unexplained, out-there problems that no one else has had.

Your site claims free enterprise-class support. I'm kinda wondering what enterprise provides this kind of support.

If you read it all, you will notice that the start page of www.proxmox.com is promoting the Proxmox Mail Gateway, and the free support is for the Mail Gateway.

So if you buy a Mail Gateway license, the support is included.

For Proxmox VE, there is a different model, as it's GPL software:
http://pve.proxmox.com/wiki/Get_support
 
Being told "I don't know, I don't feel like looking into it, go jump off a cliff" is getting kinda old.

You simply provide no useful information to debug it further. Also, the information you provide is confusing - the initial problem was about configuring a VLAN inside a VM. Now it seems you are trying to configure the VLAN on the host?

Also, you write that it does not work - but you did not mention what exactly does not work. Do you get any error messages? How do you test that it does not work?
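
For example, output from something like this would help (just a sketch on my side - adjust the interface name if your VLAN device is not eth0.2):
Code:
# confirm the VLAN device exists and carries the right tag
cat /proc/net/vlan/config
cat /proc/net/vlan/eth0.2

# watch whether tagged frames actually leave/arrive while you ping the gateway
tcpdump -ni eth0 -e vlan 2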
 
I have since then installed CentOS 5.4 and Debian 5.0.3 from DVD.

We use exactly the same packages as Debian, so the only difference seems to be the kernel. Can you please test with a standard Debian installation but with the Proxmox kernel?
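
Roughly like this (a sketch - the repository line and kernel package name are my assumption, so double-check against http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny):
Code:
# add the Proxmox VE repository (assumed lenny/pve line, as in the wiki)
echo "deb http://download.proxmox.com/debian lenny pve" >> /etc/apt/sources.list
apt-get update

# install only the PVE kernel on top of the standard Debian system
apt-get install pve-kernel-2.6.24-8-pve

# reboot, pick the 2.6.24-8-pve kernel in GRUB, then retest the VLAN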
 
You simply provide no useful information to debug it further. Also, the information you provide is confusing - the initial problem was about configuring a VLAN inside a VM. Now it seems you are trying to configure the VLAN on the host?

Also, you write that it does not work - but you did not mention what exactly does not work. Do you get any error messages? How do you test that it does not work?

I'm not sure what you need to know.

Well, I'm grasping for a solution to the problem, so I'm trying different things. In another thread I was trying to create the VLAN the same way I am now.

There are no error messages. I attempt to ping an IP address assigned to the interface. On straight Debian, the pings come back; on ProxMox, they time out.

We use exactly the same packages as Debian, so the only difference seems to be the kernel. Can you please test with a standard Debian installation but with the Proxmox kernel?
I will try that now.
 
I followed the instructions at http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny. I chose the PVE kernel on reboot, and now it doesn't recognize the network interfaces.

I rebooted into the stock kernel and thought maybe I needed to install the kernel headers, but got this error:

debian:~# aptitude install pve-headers-2.6.24-8-pve
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reading extended state information
Initializing package states... Done
Reading task descriptions... Done
The following partially installed packages will be configured:
pve-headers-2.6.24-8-pve
0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0B of archives. After unpacking 0B will be used.
Writing extended state information... Done
Setting up pve-headers-2.6.24-8-pve (2.6.24-16) ...
ln: creating symbolic link `/lib/modules/2.6.24-8-pve/build': No such file or directory
dpkg: error processing pve-headers-2.6.24-8-pve (--configure):
subprocess post-installation script returned error exit status 1
Errors were encountered while processing:
pve-headers-2.6.24-8-pve
E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install. Trying to recover:
Setting up pve-headers-2.6.24-8-pve (2.6.24-16) ...
ln: creating symbolic link `/lib/modules/2.6.24-8-pve/build': No such file or directory
dpkg: error processing pve-headers-2.6.24-8-pve (--configure):
subprocess post-installation script returned error exit status 1
Errors were encountered while processing:
pve-headers-2.6.24-8-pve
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reading extended state information
Initializing package states... Done
Reading task descriptions... Done
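
My guess is the headers postinst just wants /lib/modules/2.6.24-8-pve/ to exist, which it only does once the matching kernel image is installed. Something like this should recover it (pve-kernel-2.6.24-8-pve is my guess at the image package name, going by the headers name):
Code:
# install the matching kernel image first, then finish the half-configured headers
aptitude install pve-kernel-2.6.24-8-pve
dpkg --configure -a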
 
Hi,
I don't have the same Ethernet controller, but also an nVidia one:
Code:
lspci -v
00:10.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
    Subsystem: ASUSTeK Computer Inc. Device cb84
    Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 2300
    Memory at fe02a000 (32-bit, non-prefetchable) [size=4K]
    I/O ports at b400 [size=8]
    Memory at fe029000 (32-bit, non-prefetchable) [size=256]
    Memory at fe028000 (32-bit, non-prefetchable) [size=16]
    Capabilities: [44] Power Management version 2
    Capabilities: [70] MSI-X: Enable- Mask- TabSize=8
    Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/3 Enable+
    Capabilities: [6c] HyperTransport: MSI Mapping Enable+ Fixed+
    Kernel driver in use: forcedeth
    Kernel modules: forcedeth
VLANs work well for us, but not inside the client. We use them on the host side to set up some bridges (vmbr1, vmbr2, vmbr3, ...). These vmbrs are used inside the clients.
This is a part of our /etc/network/interfaces on the host:
Code:
auto eth1
iface eth1 inet static
    address  0.0.0.0
    netmask  0.0.0.0

auto eth1.20
iface eth1.20 inet static
    address  0.0.0.0
    netmask  0.0.0.0

auto eth1.199
iface eth1.199 inet static
    address  0.0.0.0
    netmask  0.0.0.0

auto eth1.90
iface eth1.90 inet static
    address  0.0.0.0
    netmask  0.0.0.0

auto eth1.99
iface eth1.99 inet static
    address  0.0.0.0
    netmask  0.0.0.0

auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth1.20
    bridge_stp off
    bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge_ports eth1.90
    bridge_stp off
    bridge_fd 0

dmesg shows the following info:
Code:
vlan_check_real_dev: ALREADY had VLAN registered
device eth1.20 entered promiscuous mode
audit(1258540719.182:3): dev=eth1.20 prom=256 old_prom=0 auid=4294967295
device eth1 entered promiscuous mode
audit(1258540719.182:4): dev=eth1 prom=256 old_prom=0 auid=4294967295
vmbr1: port 1(eth1.20) entering learning state
vmbr1: topology change detected, propagating
vmbr1: port 1(eth1.20) entering forwarding state
vmbr2: Dropping NETIF_F_UFO since no NETIF_F_HW_CSUM feature.
vlan_check_real_dev: ALREADY had VLAN registered
device eth1.90 entered promiscuous mode
audit(1258540720.314:5): dev=eth1.90 prom=256 old_prom=0 auid=4294967295
vmbr2: port 1(eth1.90) entering learning state
vmbr2: topology change detected, propagating
vmbr2: port 1(eth1.90) entering forwarding state
vmbr3: Dropping NETIF_F_UFO since no NETIF_F_HW_CSUM feature.
vlan_check_real_dev: ALREADY had VLAN registered

Perhaps it helps to discover the problem.

Udo
 
After a bunch of digging into Linux's internals...

I have VLANs working properly when the PVE kernel is installed into a regular Debian installation. They are recognized in the ProxMox web interface. There are plenty of miscellaneous issues with this installation, so I don't feel comfortable running it in production, but it's at least possible.

The problem with my NIC was that switching kernels moved my onboard NIC from eth0 to eth1, and because eth1 wasn't set up, it wasn't showing. I noticed in dmesg that the Intel add-in card and the NVidia interface came up as eth3 and eth1, respectively. I had to set them up in /etc/network/interfaces.
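
If the renaming bites again, pinning the names by MAC address in udev should keep them stable across kernels; a sketch of /etc/udev/rules.d/70-persistent-net.rules (the MAC is my onboard forcedeth NIC, and the Intel card would get its own line):
Code:
# keep the onboard NVidia NIC on eth0 regardless of probe order
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:23:54:c1:53:fa", NAME="eth0"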

I'll now take that Intel card out, put my system back together, and put it back in the rack, since I know the NVidia works.


Now to figure out why it doesn't work directly off the ProxMox ISO installer.
 
I reinstalled the system from the ProxMox bare metal ISO.

The first two blocks of output are from the host.

Fenix:/etc/network# ifconfig
eth0 Link encap:Ethernet HWaddr 00:23:54:c1:53:fa
inet6 addr: fe80::223:54ff:fec1:53fa/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:6480 errors:0 dropped:0 overruns:0 frame:0
TX packets:1576 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:467142 (456.1 KiB) TX bytes:148446 (144.9 KiB)
Interrupt:248 Base address:0x8000

eth0.2 Link encap:Ethernet HWaddr 00:23:54:c1:53:fa
inet6 addr: fe80::223:54ff:fec1:53fa/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:492 (492.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:12 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:960 (960.0 B) TX bytes:960 (960.0 B)

venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

vmbr0 Link encap:Ethernet HWaddr 00:23:54:c1:53:fa
inet addr:10.1.5.7 Bcast:10.1.5.255 Mask:255.255.255.0
inet6 addr: fe80::223:54ff:fec1:53fa/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6464 errors:0 dropped:0 overruns:0 frame:0
TX packets:1564 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:371894 (363.1 KiB) TX bytes:139228 (135.9 KiB)

vmbr1 Link encap:Ethernet HWaddr 00:23:54:c1:53:fa
inet6 addr: fe80::223:54ff:fec1:53fa/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:468 (468.0 B)
Fenix:/etc/network# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
address 10.1.5.7
netmask 255.255.255.0
gateway 10.1.5.1
bridge_ports eth0
bridge_stp off
bridge_fd 0

auto eth0.2
iface eth0.2 inet static
address 0.0.0.0
netmask 0.0.0.0

auto vmbr1
iface vmbr1 inet manual
bridge_ports eth0.2
bridge_stp off
bridge_fd 0

The second two are from the container.

VLANtest:/# ifconfig
eth0 Link encap:Ethernet HWaddr 22:68:06:56:56:4e
inet addr:65.182.165.39 Bcast:65.182.165.255 Mask:255.255.254.0
inet6 addr: fe80::2068:6ff:fe56:564e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:308 (308.0 B) TX bytes:384 (384.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

VLANtest:/# cat /etc/network/interfaces
auto lo
iface lo inet loopback

allow-hotplug eth0
auto eth0
iface eth0 inet static
address 65.182.165.39
netmask 255.255.254.0
broadcast 65.182.165.255
gateway 65.182.164.1
dns-nameservers 65.182.165.30

It could ping out to the public Internet for a while. As I did more tests to make sure everything was flowing right, it stopped and hasn't worked since. However, the traffic was leaving via the primary interface instead of the VLAN.
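
One thing I still want to rule out is which bridge the container's veth actually landed on; a quick check from the host (the veth name below is just a placeholder - use whatever brctl shows for this container):
Code:
# list bridges and their member ports
brctl show

# if the container's veth sits on vmbr0 instead of vmbr1, move it over
brctl delif vmbr0 veth101.0
brctl addif vmbr1 veth101.0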
 
From dmesg, with seemingly irrelevant sections removed:

forcedeth: Reverse Engineered nForce ethernet driver. Version 0.61.
forcedeth 0000:00:0a.0: ifname eth0, PHY OUI 0x732 @ 3, addr 00:23:54:c1:53:fa
forcedeth 0000:00:0a.0: highdma pwrctl mgmt timirq gbit lnktim msi desc-v3
Bridge firewalling registered
vmbr0: Dropping NETIF_F_UFO since no NETIF_F_HW_CSUM feature.
device eth0 entered promiscuous mode
audit(1260061822.450:2): dev=eth0 prom=256 old_prom=0 auid=4294967295
vmbr0: port 1(eth0) entering learning state
vmbr0: topology change detected, propagating
vmbr0: port 1(eth0) entering forwarding state
 
From dmesg, with seemingly irrelevant sections removed:

Sorry, but I can't follow what you are doing - the suggestion was to test with another network card, but it seems you are still testing with the old/internal card? (You said you would test a standard Debian with the Proxmox kernel?)
 
