New 3.10.0 Kernel

badji

Renowned Member
Jan 14, 2011
Hi,
I just deployed a cluster with the new 3.10 kernel on 3 different computers. It works well.

I get an error message: proxmox-ve-2.6.32: not correctly installed (running kernel: 3.10.0-1-pve)

root@pserver3:/home/moula# pveversion -v
proxmox-ve-2.6.32: not correctly installed (running kernel: 3.10.0-1-pve)
pve-manager: 3.1-27 (running version: 3.1-27/e5eff110)
pve-kernel-3.10.0-1-pve: 3.10.0-1
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-9
qemu-server: 3.1-12
pve-firmware: 1.1-1
libpve-common-perl: 3.0-10
libpve-access-control: 3.0-10
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-3
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

I will continue my tests.

Thanks
 
If you follow the git repository, you can see a LOT of very interesting work being done to prepare the next release: integrating KVM 1.7, more SPICE, Ceph management, etc. They introduced the 3.10 kernel just yesterday, so I think it is still a work in progress, and I doubt they will leave out DRBD support (unless it is incompatible with kernel 3.10, or there are other good reasons).
So I think it is just a matter of waiting; I expect they will announce the availability of the new version in the pvetest repo (which I thought was abandoned!) and ask for testing.
To make it short, they are doing a great job as usual!
 
First, we have to wait for the OpenVZ kernel, then we do tests with ploop. Afterwards, we decide what to do.
 
Speaking of OpenVZ: there is also the Docker project, which is growing.
I also work on the OpenStack project, and I can tell you that the community uses it a lot.
It is not yet completely reliable or secure, though; in particular, live migration will come with the next OpenStack release, Icehouse.
 
If I enable the pvetest repo on a (test) server will that be sufficient to upgrade it to the dev branch?
 
I'm aware of that :) I guess my wording was poor though.

What I meant - is pvetest what will eventually become the next release of proxmox?
 
pvetest is for testing only. Not all packages from pvetest will be released, and not all released packages are in pvetest.
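For anyone wanting to try it: a minimal sketch of enabling the pvetest repository, assuming a Proxmox VE 3.x installation on Debian Wheezy (the file name pvetest.list is an arbitrary choice):

```shell
# Add the pvetest repository (test packages only; not for production use)
echo "deb http://download.proxmox.com/debian wheezy pvetest" \
    > /etc/apt/sources.list.d/pvetest.list
apt-get update
```

After that, the test kernel packages show up in `apt-cache search pve-kernel`.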
 
Just a note: the kernel pve-kernel-3.10.0-1-pve: 3.10.0-5 seems to cause some trouble with my hardware. If a bridged network card is heavily used, the Proxmox host logs error messages like:

[409141.701721] kvm [19430]: vcpu0 unhandled wrmsr: 0x684 data 0
[409141.702662] kvm [19430]: vcpu0 unhandled wrmsr: 0x6c4 data 0
[409143.129307] kvm: zapping shadow pages for mmio generation wraparound
[409143.129401] kvm: zapping shadow pages for mmio generation wraparound

After these messages appear on the host, network transfers in the Ubuntu guest stall. I need to power off the guest to get the bridged network interface working again. The previously used kernel, pve-kernel-2.6.32-26-pve: 2.6.32-114, doesn't cause these kinds of problems.
 
Hello @all,
I have some trouble with the 2.6.32-114 kernel. I'm using a VDR (Video Disk Recorder) installed on a physical computer, not a virtual one. Sometimes my recordings are stuttering or damaged.
I want to test the new kernel. Is it possible to install only the new 3.10.x kernel, without upgrading other components?

Thanks for helping. Greetings, rmfausi.
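Installing just the kernel package should be possible, since kernels are versioned packages that install alongside the existing one. A sketch, assuming the pvetest repository is already enabled and using the package name seen earlier in this thread (pve-kernel-3.10.0-1-pve; check the exact name available in your repo first):

```shell
# Install only the 3.10 test kernel; other Proxmox packages stay untouched
apt-get update
apt-get install pve-kernel-3.10.0-1-pve
# The package's postinst normally updates the GRUB menu; reboot to use it
reboot
```

The old 2.6.32 kernel remains installed, so you can boot back into it from the GRUB menu if the new one misbehaves.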
 
Just uploaded a new version with DRBD support.
Don't want to create a new thread about the same test kernel, so I post here.

I've tried to play with the 3.10 kernel from the current pvetest repo. I want to test the nested kvm-intel feature, but it fails to boot, falling back to the emergency initramfs console.
It can't find the pve-data volume, and I can see that the corresponding nodes in /dev aren't populated. While I can see the LVs with the lvm command, they aren't activated by the kernel/initramfs. What am I missing here?
I'm also running a soft-RAID, and it's not autostarted by the kernel either (though the problem with LVM remains even when it's on plain /dev/sda, i.e. not inside a RAID volume), just like with the 2.6 kernel. I can manually assemble it with mdadm -A ..., but it's not there before. And the LVM nodes are missing; even lvm vgscan --mknodes doesn't create them.

EDIT: It seems to fail to start because it tries to initialize LVM and RAID before the AHCI devices are probed. When I manually activated them with 'lvchange -a a pve' and pressed Ctrl+D to exit the emergency shell, the system finally booted. I will try to play with the rootdelay kernel parameter later.
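The rootdelay idea above can be sketched like this, assuming GRUB2 with /etc/default/grub as on Proxmox VE 3.x (the delay value of 10 seconds is an arbitrary example; shown first against a sample copy so the edit is easy to verify):

```shell
# Demonstrate the edit on a sample file first
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > /tmp/grub-default
# Prepend rootdelay=10 inside the existing quoted default command line
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&rootdelay=10 /' /tmp/grub-default
cat /tmp/grub-default
# Apply the same sed to /etc/default/grub, then: update-grub && reboot
```

rootdelay makes the initramfs wait before mounting the root filesystem, which may give the AHCI disks time to appear before LVM/RAID activation runs.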
 
