VLAN creation failed!

cyruspy

Renowned Member
Jul 2, 2013
Hi, I'm running PVE 3.0 and VLAN creation fails at VM boot, complaining that the interface already exists. I check before starting the VM and it doesn't exist, but after the failed start the VLAN interface is there, so it seems to be created twice. Has anybody seen this?

eth0 \
eth1 -- bond0 -- vmbr0
eth2 --
eth3 /

The VLAN device gets created on bond0 and not on vmbr0, which is the bridge I assign to the VM.
 
Error:

Added VLAN with VID == 100 to IF -:bond0:-
Cannot find device "bond0.100"
can't up interface bond0.100
/var/lib/qemu-server/pve-bridge: could not launch network script
kvm: -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,vhost=on: Device 'tap' could not be initialized
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name mysql02 -smp 'sockets=1,cores=2' -nodefaults -boot 'menu=on' -vga cirrus -k en-us -m 4096 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -drive 'file=/var/lib/vz/template/iso/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,vhost=on' -device 'virtio-net-pci,mac=02:43:23:A0:9E:1E,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1


Configuration:

auto lo
iface lo inet loopback


#iface eth0 inet manual


#iface eth1 inet manual


#iface eth2 inet manual


#iface eth3 inet manual


auto bond0
iface bond0 inet manual
slaves eth0 eth1 eth2 eth3
bond_miimon 100
bond_mode 802.3ad


auto vlan20
iface vlan20 inet manual
vlan-raw-device bond0


auto vmbr0
iface vmbr0 inet manual
bridge_ports bond0
bridge_stp off
bridge_fd 0




auto br20
iface br20 inet static
address 10.1.20.14
netmask 255.255.255.0
gateway 10.1.20.1
bridge_ports vlan20
bridge_stp off
bridge_fd 0


Version:

pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1
 
Looking at the documentation, you don't have the VLAN aliases or VLAN-specific bridges in your /etc/network/interfaces configuration:
http://pve.proxmox.com/wiki/Vlans

That doc is outdated; you don't need to create VLAN interfaces in /etc/network/interfaces anymore.

Proxmox does the job for you.
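
In other words, the bridge side of the config can stay as just bond0 + vmbr0 (the vlan20/br20 stanzas aren't needed for VM traffic) and the tag goes on the VM's NIC. A rough sketch, reusing the names already posted in this thread:

auto vmbr0
iface vmbr0 inet manual
bridge_ports bond0
bridge_stp off
bridge_fd 0

and in the VM config (the MAC is just the one from the config above):

net0: virtio=02:43:23:A0:9E:1E,bridge=vmbr0,tag=100

With that, PVE creates the tagged interface on bond0 at VM start, as the "Added VLAN with VID == 100 to IF -:bond0:-" line in the error output shows.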
 
For the time being, I had to split the bond and put management traffic on a separate access-mode port. Traffic works as long as I don't mix management with the VMs' bridge + trunking.

This is the VM configuration:

balloon: 512
bootdisk: virtio0
cores: 2
ide2: none,media=cdrom
memory: 4096
name: mysql02
net0: virtio=02:43:23:A0:9E:1E,bridge=vmbr1,tag=30
net1: virtio=46:D0:26:AA:62:40,bridge=vmbr1,tag=80
onboot: 1
ostype: l26
sockets: 1
virtio0: local:100/vm-100-disk-1.qcow2,format=qcow2,size=20G
virtio1: local:100/vm-100-disk-2.qcow2,format=qcow2,size=40G
 
OK, so why does your VM config file use vmbr1?

net0: virtio=02:43:23:A0:9E:1E,bridge=vmbr1,tag=30

it should be

net0: virtio=02:43:23:A0:9E:1E,bridge=vmbr0,tag=30

Hi! Sorry for the delay. To go live I had to split the bond and use vmbr0 and vmbr1, each associated with its own bond. I posted the final configuration after making the changes, sorry about that.

Currently I can't test that configuration again. It works, but it won't be able to use all the available bandwidth as expected.
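
Roughly, the split-bond layout looks like this (a sketch only; which NICs went into which bond and the management address are assumed for illustration, not the real values):

# assumed: eth0/eth1 bonded for management on an access-mode port,
#          eth2/eth3 bonded for VM traffic on a trunk port
auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode 802.3ad

auto bond1
iface bond1 inet manual
slaves eth2 eth3
bond_miimon 100
bond_mode 802.3ad

# management bridge (address here is illustrative)
auto vmbr0
iface vmbr0 inet static
address 10.1.20.14
netmask 255.255.255.0
gateway 10.1.20.1
bridge_ports bond0
bridge_stp off
bridge_fd 0

# VM bridge on the trunked bond; guests set tag= in their net definitions
auto vmbr1
iface vmbr1 inet manual
bridge_ports bond1
bridge_stp off
bridge_fd 0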
 
The problem lies with the declaration of the VLAN interface:

auto vlan20
iface vlan20 inet manual
vlan-raw-device bond0

If you change it to a vmbr20 with bridge_ports bond0.20, it will allow you to add VLANs in your VM configs.

I know this is a really late response, but it is what fixed my issue.
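
Concretely, the vlan20/br20 pair becomes a single bridge sitting on the tagged bond. A sketch of just that change, keeping the address from the br20 stanza posted above (the rest of the file stays as posted):

auto vmbr20
iface vmbr20 inet static
address 10.1.20.14
netmask 255.255.255.0
gateway 10.1.20.1
bridge_ports bond0.20
bridge_stp off
bridge_fd 0

Naming the bridge vmbr20 instead of br20 also keeps it selectable as a VM bridge.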
 
