Kernel error with network.

Nemesiz

I tried to start an OpenVZ container and got this error:

Jun 4 20:48:26 vm kernel: CT: 113: started
Jun 4 20:48:27 vm kernel: Unable to handle kernel paging request at 00000000001ffff7 RIP:
Jun 4 20:48:27 vm kernel: [<ffffffff882669d6>] :8021q:vlan_device_event+0x66/0x430
Jun 4 20:48:27 vm kernel: PGD 169827067 PUD 14f022067 PMD 0
Jun 4 20:48:27 vm kernel: Oops: 0000 [1] PREEMPT SMP
Jun 4 20:48:27 vm kernel: CPU: 2
Jun 4 20:48:27 vm kernel: Modules linked in: ipt_MASQUERADE xt_mark xt_connmark xt_state iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack kvm_intel kvm vzethdev vznetdev simfs vzrst vzcpt tun vzdquota vzmon vzdev xt_tcpudp xt_length ipt_ttl xt_tcpmss xt_TCPMSS iptable_mangle iptable_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables x_tables bridge 8021q macvlan ipv6 fuse coretemp it87 hwmon_vid psmouse evdev pcspkr parport_pc parport serio_raw i2c_i801 i2c_core button intel_agp dm_mirror dm_snapshot dm_mod pata_jmicron pata_acpi 8139too sd_mod ata_generic ahci ohci1394 ehci_hcd skge ieee1394 8139cp mii libata uhci_hcd sky2 scsi_mod usbcore thermal processor fan
Jun 4 20:48:27 vm kernel: Pid: 15183, comm: vzctl Not tainted 2.6.24-11-pve #1 ovz005
Jun 4 20:48:27 vm kernel: RIP: 0010:[<ffffffff882669d6>] [<ffffffff882669d6>] :8021q:vlan_device_event+0x66/0x430
Jun 4 20:48:27 vm kernel: RSP: 0018:ffff810155945c78 EFLAGS: 00010206
Jun 4 20:48:27 vm kernel: RAX: 00000000001fffff RBX: 0000000000000000 RCX: 0000000000000003
Jun 4 20:48:27 vm kernel: RDX: 00000000001fffff RSI: 0000000000000005 RDI: ffff810155945cb8
Jun 4 20:48:27 vm kernel: RBP: 00000000fffffff2 R08: ffff810225468000 R09: ffff8101e4c23e40
Jun 4 20:48:27 vm kernel: R10: 0000000000000000 R11: 000000003b14bdb9 R12: ffffffff8827fa80
Jun 4 20:48:27 vm kernel: R13: ffff810222849200 R14: 0000000000000005 R15: ffffffff88351680
Jun 4 20:48:27 vm kernel: FS: 00007f8b271cc6e0(0000) GS:ffff810227002c00(0000) knlGS:0000000000000000
Jun 4 20:48:27 vm kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Jun 4 20:48:27 vm kernel: CR2: 00000000001ffff7 CR3: 0000000205197000 CR4: 00000000000026e0
Jun 4 20:48:27 vm kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jun 4 20:48:27 vm kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Jun 4 20:48:27 vm kernel: Process vzctl (pid: 15183, veid=0, threadinfo ffff810155944000, task ffff81015f03e000)
Jun 4 20:48:27 vm kernel: Stack: 0000000000000000 ffff8101fbce6000 ffff810157921c00 ffff8101fbce6000
Jun 4 20:48:27 vm kernel: 00000000ffffffff ffffffff80445ad3 0000000000000000 ffffffff804ba0ac
Jun 4 20:48:27 vm kernel: 00000000001fffff 0000000000000000 00000000fffffff2 ffffffff8827fa80
Jun 4 20:48:27 vm kernel: Call Trace:
Jun 4 20:48:27 vm kernel: [<ffffffff80445ad3>] rtmsg_ifinfo+0xe3/0x1b0
Jun 4 20:48:27 vm kernel: [<ffffffff804ba0ac>] packet_notifier+0xbc/0x200
Jun 4 20:48:27 vm kernel: [<ffffffff804cc447>] notifier_call_chain+0x37/0x70
Jun 4 20:48:27 vm kernel: [<ffffffff8043bfb5>] register_netdevice+0x355/0x490
Jun 4 20:48:27 vm kernel: [<ffffffff8043c131>] register_netdev+0x41/0x60
Jun 4 20:48:27 vm kernel: [<ffffffff8834e3dc>] :vzethdev:veth_dev_start+0xcc/0x130
Jun 4 20:48:27 vm kernel: [<ffffffff8834eaa3>] :vzethdev:veth_entry_add+0xe3/0x240
Jun 4 20:48:27 vm kernel: [<ffffffff8834eff0>] :vzethdev:real_ve_hwaddr+0x210/0x240
Jun 4 20:48:27 vm kernel: [<ffffffff8834f08c>] :vzethdev:veth_ioctl+0x6c/0x70
Jun 4 20:48:27 vm kernel: [<ffffffff882c9241>] :vzdev:vzctl_ioctl+0x51/0x7c
Jun 4 20:48:27 vm kernel: [<ffffffff802e1a1f>] do_ioctl+0x2f/0xb0
Jun 4 20:48:27 vm kernel: [<ffffffff802e1d2b>] vfs_ioctl+0x28b/0x300
Jun 4 20:48:27 vm kernel: [<ffffffff802d2b4c>] vfs_write+0x12c/0x190
Jun 4 20:48:27 vm kernel: [<ffffffff802e1de9>] sys_ioctl+0x49/0x80
Jun 4 20:48:27 vm kernel: [<ffffffff8020c68e>] system_call+0x7e/0x83
Jun 4 20:48:27 vm kernel:
Jun 4 20:48:27 vm kernel:
Jun 4 20:48:27 vm kernel: Code: 3b 48 f8 48 8b 10 4c 8d 68 f8 0f 18 0a 75 db 4d 3b 45 68 75
Jun 4 20:48:27 vm kernel: RIP [<ffffffff882669d6>] :8021q:vlan_device_event+0x66/0x430
Jun 4 20:48:27 vm kernel: RSP <ffff810155945c78>
Jun 4 20:48:27 vm kernel: CR2: 00000000001ffff7
Jun 4 20:48:27 vm kernel: ---[ end trace d9744d7d77240bdd ]---

My network looks like this:

ISP -> eth0 (217.x.x.x) -> iptables MASQUERADE -> ( vmbr0 (10.10.10.1) -> vethx )

and a virtual interface for testing:

ip link add link eth0 name eth0.1 address 00:aa:bb:cc:dd:dd type macvlan
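
For reference, the MASQUERADE step in that layout corresponds to roughly the following (a minimal sketch; the interface names and the 10.10.10.0/24 subnet come from the diagram above, everything else is an assumption):

# enable forwarding between vmbr0 and eth0
echo 1 > /proc/sys/net/ipv4/ip_forward
# NAT traffic from the container subnet out through the public interface
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE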

pve-manager: 1.5-9 (pve-manager/1.5/4728)
running kernel: 2.6.24-11-pve
proxmox-ve-2.6.24: 1.5-23
pve-kernel-2.6.24-11-pve: 2.6.24-23
qemu-server: 1.1-14
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-6
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.4-1
 
I tried to reproduce this on my own computer, but without success.

I don't want to try it on the main server, but I can give you some details:

# lspci
00:00.0 Host bridge: Intel Corporation 82P965/G965 Memory Controller Hub (rev 02)
00:02.0 VGA compatible controller: Intel Corporation 82G965 Integrated Graphics Controller (rev 02)
00:1a.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 02)
00:1a.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 02)
00:1a.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 02)
00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 02)
00:1c.1 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 2 (rev 02)
00:1c.2 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 3 (rev 02)
00:1d.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 02)
00:1d.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 02)
00:1d.2 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 02)
00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev f2)
00:1f.0 ISA bridge: Intel Corporation 82801HB/HR (ICH8/R) LPC Interface Controller (rev 02)
00:1f.2 SATA controller: Intel Corporation 82801HB (ICH8) 4 port SATA AHCI Controller (rev 02)
00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 02)
02:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8056 PCI-E Gigabit Ethernet Controller (rev 13)
03:00.0 SATA controller: JMicron Technologies, Inc. JMicron 20360/20363 AHCI Controller (rev 02)
03:00.1 IDE interface: JMicron Technologies, Inc. JMicron 20360/20363 AHCI Controller (rev 02)
04:00.0 Ethernet controller: D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev 11)
04:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
04:07.0 FireWire (IEEE 1394): Texas Instruments TSB43AB23 IEEE-1394a-2000 Controller (PHY/Link)

# lsmod
Module Size Used by
kvm_intel 58568 0
kvm 207496 1 kvm_intel
ipt_MASQUERADE 11264 1
xt_mark 11136 12
xt_connmark 11520 3
xt_state 11264 53
iptable_nat 19460 1
nf_nat 31632 2 ipt_MASQUERADE,iptable_nat
nf_conntrack_ipv4 36496 58 iptable_nat
nf_conntrack 102880 5 xt_connmark,xt_state,iptable_nat,nf_nat,nf_conntrack_ipv4
vzethdev 23552 0
vznetdev 32904 6
simfs 14064 6
vzrst 158248 0
vzcpt 131256 0
tun 23168 2 vzrst,vzcpt
vzdquota 60016 6 [permanent]
vzmon 58008 10 vzethdev,vznetdev,vzrst,vzcpt
vzdev 12808 4 vzethdev,vznetdev,vzdquota,vzmon
xt_tcpudp 12160 41
xt_length 10752 0
ipt_ttl 10624 0
xt_tcpmss 11008 0
xt_TCPMSS 13440 0
iptable_mangle 13824 7
iptable_filter 13568 7
xt_multiport 12160 0
xt_limit 11904 0
ipt_tos 10368 0
ipt_REJECT 13824 2
ip_tables 33384 3 iptable_nat,iptable_mangle,iptable_filter
x_tables 34056 15 ipt_MASQUERADE,xt_mark,xt_connmark,xt_state,iptable_nat,xt_tcpudp,xt_length,ipt_ttl,xt_tcpmss,xt_TCPMSS,xt_multiport,xt_limit,ipt_tos,ipt_REJECT,ip_tables
8021q 37632 6
bridge 75304 0
macvlan 18048 0
ipv6 350592 1760 vzrst,vzcpt,vzmon
fuse 65808 13
coretemp 17152 0
it87 34448 0
hwmon_vid 12416 1 it87
i2c_i801 19484 0
psmouse 54684 0
pcspkr 12160 0
parport_pc 48168 0
parport 53388 1 parport_pc
i2c_core 36480 1 i2c_i801
evdev 22912 0
serio_raw 16388 0
intel_agp 38304 1
button 18080 0
dm_mirror 34816 0
dm_snapshot 27744 0
dm_mod 80248 9 dm_mirror,dm_snapshot
pata_jmicron 15744 0
8139too 38656 0
sd_mod 41088 3
pata_acpi 17152 0
ata_generic 17156 0
ahci 41348 2
ohci1394 44212 0
8139cp 35200 0
libata 189616 4 pata_jmicron,pata_acpi,ata_generic,ahci
skge 56080 0
ehci_hcd 50188 0
uhci_hcd 38176 0
mii 14976 2 8139too,8139cp
ieee1394 114896 1 ohci1394
scsi_mod 189752 2 sd_mod,libata
sky2 62980 0
usbcore 180912 3 ehci_hcd,uhci_hcd
thermal 26912 0
processor 49096 1 thermal
fan 13960 0
 
On kernel 2.6.18-2-pve:

/sbin/ip link add link eth0 name eth0.1 address 00:aa:bb:cc:dd:dd type macvlan
Command "add" is unknown, try "ip link help".

Warning: Your system uses an old version of the Linux kernel.

Due to a bug in the Linux kernel, your system may stop responding when writing data to a TrueCrypt volume. This problem can be solved by upgrading the kernel to version 2.6.24 or later.
 
Sorry, I just reread your test. Why do you use macvlan?

ip link add link eth0 name eth0.1 address 00:aa:bb:cc:dd:dd type macvlan

AFAIK, macvlan is not a normal VLAN. Why don't you use the standard Debian tools to set up a VLAN?
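
For comparison, a standard Debian 802.1q setup looks roughly like this (a sketch; the VLAN ID 100 and the address are made up, only eth0 is from your post):

apt-get install vlan
modprobe 8021q

# /etc/network/interfaces
auto eth0.100
iface eth0.100 inet static
    address 192.168.100.1
    netmask 255.255.255.0
    vlan-raw-device eth0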
 
It's easy to assign an IP address inside a VPS. But when you want to assign an IP to the host, it becomes complicated, because the ISP binds each IP to a unique MAC address.
 
It's easy to assign an IP address inside a VPS. But when you want to assign an IP to the host, it becomes complicated, because the ISP binds each IP to a unique MAC address.

Sorry, I can't follow you. AFAIK the macvlan module is highly experimental and only works on newer kernels, so it makes no sense to use it.
 
ip link with macvlan gives access to a 'virtualized' NIC (packets don't arrive at the wrong MAC address). I tried vethd, but it captures all packets coming to eth0. OpenVZ veth is stable, I think. Maybe try using it to virtualize a NIC connected to eth0?
 
Please contact the OpenVZ team if you want macvlan support with OpenVZ. Note that macvlan has some serious limitations, e.g. containers on the same host cannot communicate with each other. Instead, you can simply use a bridge and veth.
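
A rough sketch of that approach (assumptions: container ID 113 from your log, the vmbr0 bridge from your layout, placeholder MAC addresses):

# give the container a veth pair; the host side is veth113.0
vzctl set 113 --netif_add eth0,00:aa:bb:cc:dd:01,veth113.0,00:aa:bb:cc:dd:02 --save
vzctl start 113
# attach the host end to the existing bridge and bring it up
brctl addif vmbr0 veth113.0
ip link set veth113.0 up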
 
