IPv6 bridging issue with latest pve-kernel-2.6.32-37-pve

lopar

New Member
Feb 16, 2015
France
(Whoops, just saw the dedicated PVE Networking forum section, sorry for the misplaced post…)

Hi Proxmox forums!

For my first post here, I'd like to submit to you a rather peculiar issue (bug?) I hit on two separate Proxmox servers running the pve-no-subscription repository.

The first server is an online.net Dedibox XC with their custom PVE installation (really nothing more than a base Debian with Proxmox installed afterwards), and the second server is a laptop with a fresh install of PVE 3.3.
I compared the package lists of both servers (dpkg -l) and got essentially identical results, so I consider them both valid Proxmox installations.

All VMs are configured with virtio devices (both block and net), running 2 cores with cpu=host.
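For reference, such a VM definition looks roughly like this in /etc/pve/qemu-server/<vmid>.conf (VMID 100, the MAC address and the disk volume are made-up illustrative values):

Code:
# /etc/pve/qemu-server/100.conf -- hypothetical example
cores: 2
cpu: host
memory: 2048
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
virtio0: local:100/vm-100-disk-1.qcow2
ostype: l26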

Last Friday, I ran apt-get dist-upgrade on both of them and rebooted in order to ensure they were running the latest pve-kernel and that the VMs were running the latest qemu-kvm code.
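Nothing fancy there, for the record:

Code:
apt-get update
apt-get dist-upgrade
reboot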
All seemed right, except that 5 minutes after bootup, all VMs with low network activity became unreachable over IPv6. IPv4 was OK.

tcpdump inside a VM reveals nothing except outgoing ICMPv6 messages, with no replies from either the solicited neighbors or the routers.
tcpdumping the vmbr or the tap interfaces reveals nothing more.
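For anyone trying to reproduce the captures, this is essentially what I ran (tap100i0 assumes VMID 100 with its first NIC; adjust the interface names to your setup):

Code:
# inside the VM: solicitations go out, nothing comes back
tcpdump -n -i eth0 icmp6
# on the host: the same traffic, seen from the tap and bridge side
tcpdump -n -i tap100i0 icmp6
tcpdump -n -i vmbr0 icmp6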

Here is some info on the meta-package that pulls in the culprit kernel:

Code:
Package: proxmox-ve-2.6.32
Priority: optional
Section: admin
Maintainer: Proxmox Support Team <support@proxmox.com>
Architecture: all
Version: 3.3-147
Replaces: proxmox-ve, pve-kernel, proxmox-virtual-environment
Provides: proxmox-virtual-environment
Depends: libc6 (>= 2.7-18), pve-kernel-2.6.32-37-pve, pve-firmware, pve-manager, qemu-server, pve-qemu-kvm, openssh-client, openssh-server, apt, vncterm, vzctl (>= 3.0.29)
Please note that neither pve-kernel-2.6.32-34-pve nor pve-kernel-3.10.0-7-pve shows this faulty behavior. IPv6 bridging is OK with both of them.

I kept updating my systems over the weekend, since some packages (pve-qemu-kvm) received several updates, rebooting after each round. Nothing fixed the issue with pve-kernel-2.6.32-37-pve.
I decided to run on pve-kernel-3.10.0-7-pve instead, since I don't need OpenVZ containers.
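In case it helps anyone, switching kernels boiled down to something like this (the exact menuentry title to pin depends on your grub.cfg):

Code:
apt-get install pve-kernel-3.10.0-7-pve
# list the generated boot entries to find the right title
grep menuentry /boot/grub/grub.cfg
# set GRUB_DEFAULT in /etc/default/grub to that entry, then
update-grub
reboot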

Thanks for reading through this!
 
Hi,

I have the same kind of issue: VMs (and the hypervisor itself) do not reply to IPv6 NDP.
I would love to upgrade to a 3.x kernel, but it seems that kernel has issues with HP Smart Array controllers (and of course, that's exactly the kind of hardware I am running).
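For what it's worth, here is how I am checking whether neighbor discovery ever completes (vmbr0 and the fe80::1 gateway address are placeholders for my actual setup; ndisc6 comes from the ndisc6 package):

Code:
# does the neighbor cache reach REACHABLE, or stay FAILED/INCOMPLETE?
ip -6 neigh show dev vmbr0
# actively solicit a neighbor to see if an advertisement comes back
ndisc6 fe80::1 vmbr0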

Is there an open bug for this kind of problem?
I don't mind running such an old kernel, and I don't mind upgrading to 3.10 either, but I'd appreciate having at least one working option :)

Help!
 
Hi,
where is your server hosted?
And what hardware are you using?
We can't reproduce this bug yet!
 
Hello Wolfgang,

Hi,
where is your server hosted?
These are colocated servers, not boxes from a dedicated-server provider. If you'd like access to one, that can be arranged :)

And what hardware are you using?

ProLiant DL380 G5 (for one cluster)
ProLiant DL360 G5 (for another cluster)

We can't reproduce this bug yet!

I have tried to install pve-kernel-3.10.0-7-pve, but the server does not boot (it seems the Smart Array driver is not active in that kernel, so Linux cannot bring up the root disk),
so I downgraded to 2.6.32-26-pve as suggested in https://bugzilla.proxmox.com/show_bug.cgi?id=600. For now, this is working fine...
 
Hi,
I would love to upgrade to a 3.x kernel, but it seems that kernel has issues with HP Smart Array controllers (and of course, that's exactly the kind of hardware I am running).

It's not a bug; you must activate the driver. Here's my kernel command line:
Code:
linux   /vmlinuz-3.10.0-4-pve root=/dev/mapper/pve-root hpsa.hpsa_simple_mode=1 hpsa.hpsa_allow_any=1 ro  quiet
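If you want those options to survive kernel upgrades instead of editing the boot entry by hand, put them in /etc/default/grub and regenerate the config (merge them with whatever your cmdline already contains):

Code:
# in /etc/default/grub, extend the default kernel command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet hpsa.hpsa_simple_mode=1 hpsa.hpsa_allow_any=1"
# then regenerate /boot/grub/grub.cfg:
update-grub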