New Proxmox VE kernel branch 2.6.35 - with KSM support

martin

Proxmox Staff Member
We just released a new kernel branch, 2.6.35, with KSM support (no OpenVZ) to the pvetest repository. Everybody is encouraged to test and give feedback.

Proxmox VE Kernels
__________________
Best regards,
Martin Maurer
 
I'm having problems with HP P4xx RAID controllers with this kernel. The controllers are supported by both the hpsa (CONFIG_SCSI_HPSA) and the cciss (CONFIG_BLK_CPQ_CISS_DA) kernel drivers, which causes a race condition during boot: when the hpsa driver gets loaded first, the RAID volumes appear as /dev/sd*, while with the older driver they appear as /dev/cciss/c*d*. I have /dev/sd* filtered out in lvm.conf because I use multipath for iSCSI, so the pve volume group is not accessible during boot for me. Can you disable the hpsa driver? Judging by its description, it does the same thing as the old driver:

This driver supports HP Smart Array Controllers (circa 2009).
It is a SCSI alternative to the cciss driver, which is a block
driver. Anyone wishing to use HP Smart Array controllers who
would prefer the devices be presented to linux as SCSI devices,
rather than as generic block devices should say Y here.
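
For context, a filter like the one mentioned above can only pin down one of the two naming schemes. A sketch of such an lvm.conf fragment (the accept/reject patterns are assumptions about this particular setup, not the poster's actual config):

```
# /etc/lvm/lvm.conf, devices section (hypothetical patterns):
# accept cciss-style RAID volumes and multipath maps,
# reject raw /dev/sd* so LVM never sees the iSCSI paths directly
filter = [ "a|^/dev/cciss/|", "a|^/dev/mapper/|", "r|^/dev/sd.*|", "a|.*|" ]
```

This only works while the cciss driver wins the race; if hpsa binds the controller first, the boot volume itself becomes /dev/sd* and is rejected, which is exactly the failure described above.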
 
Another HP P4xx controller issue, triggered by using hpacucli (the CLI tool for managing P4xx controllers):

------------[ cut here ]------------
WARNING: at drivers/pci/intel-iommu.c:2735 intel_unmap_page+0x12e/0x140()
Hardware name: ProLiant DL360 G6
Driver unmaps unmatched page at PFN 0
Modules linked in: kvm_intel kvm dlm configfs crc32c ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi bridge stp bonding ipmi_poweroff ipmi_devintf ipmi_watchdog snd_pcm snd_timer snd tpm_tis psmouse ipmi_si tpm soundcore snd_page_alloc pcspkr tpm_bios ipmi_msghandler hpilo serio_raw i7core_edac edac_core power_meter usbhid hid bnx2 e1000e cciss [last unloaded: scsi_wait_scan]
Pid: 3470, comm: .hpacucli Not tainted 2.6.35-1-pve #1
Call Trace:
[<ffffffff812f68de>] ? intel_unmap_page+0x12e/0x140
[<ffffffff81061900>] warn_slowpath_common+0x80/0xd0
[<ffffffff81061a4e>] warn_slowpath_fmt+0x6e/0x70
[<ffffffff81037d59>] ? default_spin_lock_flags+0x9/0x10
[<ffffffff812f218d>] ? find_iova+0x5d/0x90
[<ffffffff812f68de>] intel_unmap_page+0x12e/0x140
[<ffffffffa0000151>] pci_unmap_single+0x41/0x60 [cciss]
[<ffffffffa000836c>] cciss_ioctl+0x83c/0x1150 [cciss]
[<ffffffffa0008cd5>] do_ioctl+0x55/0x90 [cciss]
[<ffffffffa00090f6>] cciss_compat_ioctl+0x3e6/0x430 [cciss]
[<ffffffff812c464f>] ? vsnprintf+0x43f/0x5b0
[<ffffffff812b7ed2>] compat_blkdev_ioctl+0x172/0x1ca0
[<ffffffff8139aa20>] ? kobj_lookup+0x1a0/0x1c0
[<ffffffff812ae620>] ? exact_match+0x0/0x10
[<ffffffff815b4cdd>] ? _unlock_kernel+0x3d/0x90
[<ffffffff81185e0a>] ? __blkdev_get+0x1da/0x410
[<ffffffff81186050>] ? blkdev_get+0x10/0x20
[<ffffffff8118615a>] ? blkdev_open+0xfa/0x140
[<ffffffff811527c0>] ? __dentry_open+0x250/0x320
[<ffffffff81186060>] ? blkdev_open+0x0/0x140
[<ffffffff811529a4>] ? nameidata_to_filp+0x54/0x70
[<ffffffff81170430>] ? mntput_no_expire+0x30/0x110
[<ffffffff8119aebd>] compat_sys_ioctl+0x11d/0x1800
[<ffffffff8115efd8>] ? putname+0x38/0x50
[<ffffffff81047363>] ia32_sysret+0x0/0x5
---[ end trace fbaafa5b1582c22c ]---
 
Not sure about that. Can't you modify your LVM filter settings instead?

I can, but it will cause issues. I need to filter out the iSCSI disk so that LVM sees only the multipath device built on top of it. If I just filter out /dev/sda (the hpsa device) while my RAID controller keeps jumping between /dev/sda and /dev/cciss/c0d0 across reboots (depending on which driver wins), I will end up with a non-working filter. The only solution for me would be adding blacklist=hpsa (or cciss) to GRUB if you keep hpsa, but there may still be other people with the same problem.
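
Instead of a GRUB parameter, the unwanted driver can also be blacklisted through modprobe.d; a sketch assuming a Debian-based PVE host (the file name is arbitrary):

```
# /etc/modprobe.d/blacklist-hpsa.conf
# keep HP Smart Array controllers on the legacy cciss block driver
blacklist hpsa
```

After editing, regenerate the initramfs (update-initramfs -u) so the blacklist also takes effect in early boot, where the race actually happens.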
 
I've booted into 2.6.32-4 and I'm still seeing some issues, so they are not related to the kernel; I'll look into it.
 
Summary:

2.6.32-4 and 2.6.35.7-1 - kernel oops while using hpacucli and controller lockups with the cciss driver
2.6.18-2 - just tested, works OK, no problems with the cciss driver
2.6.32-2 - used this for a long time, no problems at all
2.6.35.7 vanilla with a custom config - no problems with the cciss driver

I guess something changed after 2.6.32-2 (in the kernel config, or a patch was added).
 

Hi,
with this kernel the Windows I/O SMP problem is gone!! Before, h2benchw showed poor performance values with more than one CPU/core.
With 2.6.35 I got the same values with one and with two cores.

Udo
 

I just did a quick reinstall with pve-1.6 and I'm still getting these errors, so this is a problem with my hardware, probably the new firmware I installed a few days ago; please ignore this. The only remaining issue is having both the hpsa and cciss drivers.
 
After upgrading:
pve-manager: 1.6-4 (pve-manager/1.6/5229)
running kernel: 2.6.35-1-pve
proxmox-ve-2.6.35: 1.6-1
pve-kernel-2.6.35-1-pve: 2.6.35-1
pve-kernel-2.6.18-2-pve: 2.6.18-5
qemu-server: 1.1-19
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-8
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-2
ksm-control-daemon: 1.0-4

OpenVZ VMs can neither be started nor destroyed:
/usr/sbin/vzctl start 103
Unable to open /dev/vzctl: No such file or directory
Please check that vzdev kernel module is loaded and you have sufficient permissions to access the file.
VM 103 start failed -

/usr/sbin/vzctl destroy 106
Unable to open /dev/vzctl: No such file or directory
Please check that vzdev kernel module is loaded and you have sufficient permissions to access the file.
VM 106 destroy failed -

On creation:
Please check that vzdev kernel module is loaded and you have sufficient permissions to access the file,
unable to apply VM settings -

I cannot even create a new VM. But Windows guests are running fine.
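
The vzctl errors above are what you get when the running kernel simply has no OpenVZ support, which is the case for the 2.6.35 branch. A quick sanity check, sketched as a small shell function (check_openvz is a made-up name, not a PVE tool):

```shell
#!/bin/sh
# Report whether the running kernel can manage OpenVZ containers:
# an OpenVZ-enabled kernel exposes the /dev/vzctl control device.
check_openvz() {
    if [ -e /dev/vzctl ]; then
        echo "OpenVZ control device present - vzctl should work"
    else
        echo "no /dev/vzctl - kernel $(uname -r) has no OpenVZ support"
    fi
}
check_openvz
```

Booting back into one of the 2.6.18 or 2.6.32 OpenVZ kernels restores container support; KVM guests are unaffected either way.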
 
Sorry, I am a newbie, but I could not figure out why Proxmox released a kernel with no OpenVZ support. How can it be used in a production environment without OpenVZ support? Sorry, just trying to understand.
 
Because some features of KVM (e.g. KSM) do not work with OpenVZ kernels. This has already been discussed many times, e.g.: http://forum.proxmox.com/threads/1991-Survey-Proxmox-VE-Kernel-with-or-without-OpenVZ

We now have four different kernel branches, see http://pve.proxmox.com/wiki/Proxmox_VE_Kernel
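
As a side note on KSM itself: on kernels built with CONFIG_KSM its state is exposed under /sys/kernel/mm/ksm, so the deduplication effect can be watched directly. A sketch (ksm_status is a made-up helper, not part of ksm-control-daemon):

```shell
#!/bin/sh
# Print the most interesting KSM counters from sysfs, if present.
ksm_status() {
    dir=/sys/kernel/mm/ksm
    if [ -d "$dir" ]; then
        echo "run:           $(cat "$dir/run")"           # 1 = merge daemon active
        echo "pages_shared:  $(cat "$dir/pages_shared")"  # deduplicated pages kept
        echo "pages_sharing: $(cat "$dir/pages_sharing")" # pages mapped onto them
    else
        echo "KSM not available in this kernel"
    fi
}
ksm_status
```

The ksm-control-daemon package shipped with PVE tunes these knobs automatically; the function above only reads them.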
 
perfect!
 
