VLANs inside VE

Sorry, but I can't follow what you are doing. The suggestion was to test with another network card, but it seems you are still testing with the old/internal card? (You said you would test a standard Debian with the Proxmox kernel?)

I'll start from the top.

  1. I was having problems bridging VLANs to containers through PVE's web GUI.
  2. I tried creating VLANs inside the container.
  3. I installed CentOS and could create VLANs just fine.
  4. I installed Debian Lenny and VLANs worked just fine.
  5. I installed the PVE kernel and created the VLANs directly in the config files (see the example stanza after this list).
  6. The network card wasn't recognized, so this thread recommended a different network card.
  7. The VLANs still didn't work, so I did more digging and discovered that what had been eth0 under stock Debian was now eth1 under the PVE kernel, and the new network card was eth3; because I hadn't configured eth1 or eth3, they didn't initially show up in ifconfig.
  8. I believe VLANs were properly working on both interface cards, so I removed the Intel card.
  9. I had the VLAN pinging out, but further investigation showed the traffic leaving the base interface with the host's IP as the source address instead of the container's IP.
  10. The container can no longer ping out, but I didn't change anything.
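
For reference, this is roughly the kind of stanza I mean by "creating the VLANs directly in the config files" in step 5. Treat it as a sketch rather than my exact file: it assumes the vlan package is installed and the 8021q module loads, and it reuses the addresses from the ifconfig output further down.

# /etc/network/interfaces -- illustrative sketch only
auto eth1
iface eth1 inet static
        address 10.1.5.7
        netmask 255.255.255.0

# VLAN 2 tagged on top of eth1 (needs the vlan package / 8021q module)
auto eth1.2
iface eth1.2 inet static
        address 65.182.165.39
        netmask 255.255.254.0
        vlan-raw-device eth1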

Maybe it's because I'm the one who did all of this, but I can track what I did in the previous messages I posted.
 
So maybe you should go back to point 8 and retest with the Intel card?

Debian 5.0.3 with the PVE kernel and kernel headers, no other PVE packages. Onboard NIC disabled, Intel NIC present.

debian:~# ifconfig
eth1      Link encap:Ethernet  HWaddr 00:90:27:5a:d1:8e
          inet addr:10.1.5.7  Bcast:10.1.5.255  Mask:255.255.255.0
          inet6 addr: fe80::290:27ff:fe5a:d18e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12620 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5493 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:9432255 (8.9 MiB)  TX bytes:479913 (468.6 KiB)

eth1.2    Link encap:Ethernet  HWaddr 00:90:27:5a:d1:8e
          inet addr:65.182.165.39  Bcast:65.182.165.255  Mask:255.255.254.0
          inet6 addr: fe80::290:27ff:fe5a:d18e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:742 errors:0 dropped:0 overruns:0 frame:0
          TX packets:117 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:48160 (47.0 KiB)  TX bytes:8130 (7.9 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

debian:~# uname -a
Linux debian 2.6.24-9-pve #1 SMP PREEMPT Tue Nov 17 09:34:41 CET 2009 x86_64 GNU/Linux
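
To double-check the source-address concern from step 9 (traffic leaving the base interface with the host's IP rather than the container's), something like the following should show which address the tagged traffic actually goes out with. This is only a sketch using the interface names from the ifconfig output above, and whether the 802.1Q tag is visible on the base interface depends on the driver:

# traffic as it appears on the VLAN interface
tcpdump -ni eth1.2 icmp

# the same traffic on the base interface, with link-level headers so the VLAN tag shows
tcpdump -eni eth1 vlan 2 and icmp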

Pings from the public Internet showing that it works fine:

[mhammett@ds00209 ~]$ ping 65.182.165.39
PING 65.182.165.39 (65.182.165.39) 56(84) bytes of data.
64 bytes from 65.182.165.39: icmp_seq=1 ttl=56 time=22.0 ms
64 bytes from 65.182.165.39: icmp_seq=2 ttl=56 time=26.1 ms
64 bytes from 65.182.165.39: icmp_seq=3 ttl=56 time=24.2 ms
64 bytes from 65.182.165.39: icmp_seq=4 ttl=56 time=23.4 ms
64 bytes from 65.182.165.39: icmp_seq=5 ttl=56 time=23.5 ms
64 bytes from 65.182.165.39: icmp_seq=6 ttl=56 time=22.7 ms
64 bytes from 65.182.165.39: icmp_seq=7 ttl=56 time=22.1 ms

--- 65.182.165.39 ping statistics ---
8 packets transmitted, 7 received, 12% packet loss, time 7006ms
rtt min/avg/max/mdev = 22.067/23.488/26.122/1.302 ms, pipe 2
[mhammett@ds00209 ~]$

Since #8 showed that VLANs were working fine on both interfaces, I'll leave this up for a couple of hours to make sure that it stays working. If I'm still around, I'll post again on whether this works fine with just the onboard NIC.
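
For the "leave it up for a couple of hours" part, a simple way to watch it from a remote box is a timestamped ping loop like this sketch (the address is the VLAN interface's public IP from the test above; adjust the interval and log name to taste):

# one ping per minute, logged with a timestamp
while true; do
    printf '%s  ' "$(date '+%F %T')"
    ping -c 1 -W 2 65.182.165.39 >/dev/null 2>&1 && echo up || echo DOWN
    sleep 60
done | tee -a vlan-ping.log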
 
This got put on the back burner for a while. I reinstalled with the latest kernel, etc., and am trying to get it working again. I commented in another thread before realizing I still had this one open.

How would I check why it isn't processing interfaces.new upon reboot? I didn't see anything relevant in /var/log/messages or /var/log/dmesg.
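
A rough way to narrow that down, assuming PVE stages pending network changes in /etc/network/interfaces.new and a boot script is supposed to move them into place (the commands are a sketch; log paths vary by setup):

# is the pending file still there after the reboot, and how does it differ?
ls -l /etc/network/interfaces /etc/network/interfaces.new
diff -u /etc/network/interfaces /etc/network/interfaces.new

# which boot script references interfaces.new?
grep -rl 'interfaces.new' /etc/init.d /etc/network 2>/dev/null

# anything related in the logs?
grep -i 'interfaces' /var/log/syslog /var/log/messages 2>/dev/null | less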
 
