New Kernel for Proxmox VE 3.0

martin

Proxmox Staff Member
Apr 28, 2005
We just moved new kernel and firmware packages from pvetest to our stable repository. The kernel includes many updated drivers from RHEL 6.4.

Release Notes

- pve-kernel-2.6.32 (2.6.32-107)

  • fix bug #420: allow grub-efi-amd64
  • update to vzkernel-2.6.32-042stab078.28.src.rpm
  • update aacraid to aacraid-1.2.1-30200.src.rpm
  • update ixgbe to 3.16.1
  • update e1000e to 2.4.14
  • update igb to 4.3.0
- pve-firmware: 1.0-23

  • update for RHEL6.4 based kernels
Everybody is advised to upgrade.
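For anyone upgrading, the usual sequence on a Proxmox VE node looks roughly like this (a sketch, assuming the stable pve repository is already configured in your APT sources; the new kernel only takes effect after a reboot):

```shell
# Refresh package lists and pull in the new kernel and firmware packages
apt-get update
apt-get dist-upgrade

# The updated kernel is only loaded after a reboot of the node
reboot
```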

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 

dea

Well-Known Member
Feb 6, 2009
Thanks Tom! Just installed :)
 

cesarpk

Active Member
Mar 31, 2012
Excellent announcement, thanks Proxmox!!!

And let me ask a few questions.
With the latest PVE 2.3 update/upgrade I have three problems:

1- Lost PVE quorum because multicast_snooping is enabled on the bridge
2- Cannot make bonding balance-alb work over a single unmanaged switch
3- These days it is very common to get a PC workstation with an onboard Realtek network interface, and the Red Hat kernel generally ships very old drivers for this brand

The big question:
Have you solved these issues with your new kernel?
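For what it's worth on point 1: the bridge snooping flag can be inspected and toggled at runtime via sysfs (vmbr0 is just the typical Proxmox bridge name; adjust for your setup, and note the change does not persist across reboots unless added to /etc/network/interfaces):

```shell
# Show the current IGMP snooping state on the bridge (1 = on, 0 = off)
cat /sys/class/net/vmbr0/bridge/multicast_snooping

# Disable snooping so corosync multicast traffic is not filtered away
echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping
```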

Best regards
Cesar
 

mir

Famous Member
Apr 14, 2012
Copenhagen, Denmark
Hi all,

Upgraded to this kernel on two servers and tried doing a live migration of a CT. Result: kernel crash on both source and destination, leaving both completely unavailable even on the console.
 

mir

Famous Member
Apr 14, 2012
Copenhagen, Denmark
Also, rebooting is not possible due to complaints from vzctl about crashed migration jobs. A question: does upgrading to this new kernel require a reboot of all CTs before migration works again?
 

riptide_wave

Member
Mar 21, 2013
Minnesota, USA
mir said:
"Upgraded to this kernel on two servers and tried doing a live migration of a CT. Result: kernel crash on both source and destination, leaving both completely unavailable even on the console."

Had the same issue here, except only the source had a kernel panic for me. I have not seen it happen again since the update, though, and I was unable to find any dump in the syslogs. If it helps, both the source and target were running the new 2.6.32-107 kernel.
 

mir

Famous Member
Apr 14, 2012
Copenhagen, Denmark
I have found out that the problem is related to one specific CT. This CT is, AFAIK, in no way special, but starting it results in a kernel panic on the host. What I experienced was that this CT was HA-managed, so fencing kicked in, which caused the kernel panic on both source and destination. I will investigate this particular CT further.
 

macday

Member
Mar 10, 2010
Stuttgart / Germany
Thank you guys.

DRBD 8.4.3 is working with a manual fix.
But there are some IPMI changes which broke my Dell OMSA installation, so I had to upgrade OMSA to 7.3 and it worked again.

Cheers Mac
 

grharry

New Member
May 3, 2010
ucarp in a CT stopped working after the upgrade in my case... yes, with the mac_filter=off setting.

syslog says "eth0: SIOETHTOOL(ETHTOOL_GTSO) ioctl failed: Operation not permitted"
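For context, the mac_filter=off setting mentioned above normally lives in the container's OpenVZ config file; a hypothetical excerpt (CTID 101, the MAC addresses, and the interface names are all placeholders, not taken from this thread):

```shell
# /etc/vz/conf/101.conf -- hypothetical excerpt
# mac_filter=off lets the CT send/receive on MACs other than its own,
# which ucarp needs for its virtual-address failover
NETIF="ifname=eth0,mac=00:18:51:AA:BB:CC,host_ifname=veth101.0,host_mac=00:18:51:DD:EE:FF,mac_filter=off"
```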
 

KingLEV

New Member
Jun 13, 2013
Russia, Novosibirsk
I have a problem with cloning a VM:
cat syslog |grep err
Jul 24 16:01:40 PMNode002 kernel: ACPI: IRQ0 used by override.
Jul 24 16:01:40 PMNode002 kernel: ACPI: IRQ2 used by override.
Jul 24 16:01:40 PMNode002 kernel: ACPI: IRQ9 used by override.
Jul 24 16:01:40 PMNode002 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P32_._PRT]
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEX0._PRT]
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEX2._PRT]
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.EVRP._PRT]
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.JEX0._PRT]
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.JEX1._PRT]
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEG1._PRT]
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEG2._PRT]
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 7 9 10 *11 12)
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 7 9 10 11 12) *0, disabled.
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 7 9 10 *11 12)
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 7 9 10 11 12) *0, disabled.
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 7 *9 10 11 12)
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 7 9 10 11 12) *0, disabled.
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 7 9 *10 11 12)
Jul 24 16:01:40 PMNode002 kernel: ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 7 *9 10 11 12)
Jul 24 16:01:40 PMNode002 kernel: e1000e 0000:00:19.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
Jul 24 16:01:40 PMNode002 kernel: tpm_tis 00:08: tpm_transmit: tpm_send: error -62
Jul 24 16:01:40 PMNode002 kernel: EXT3-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended

Jul 24 16:02:20 PMNode002 dlm_controld[6708]: process_uevent online@ error -1 errno 2

Hardware:
i7 3820 / 64 GB RAM / 1x ST1000DM003

Can somebody help me?

(Attachment: IMG_20130724_151709.jpg)
 
