Meltdown and Spectre Linux Kernel fixes

gkovacs

Well-Known Member
We have installed the patch on a few servers last night (dual socket Westmere Xeons, single and dual socket Sandy Bridge and Ivy Bridge Xeons), all servers booted without any problems. There are no obvious performance regressions, all LXC containers and KVM guests operate within the same CPU budgets as before.

The kernel does not report the CPU bug in /proc/cpuinfo, but the page table isolation feature seems to be enabled. This is a dual socket Westmere Xeon, but the same happens on all architectures in our server farm:

Code:
root@proxmox:~# uname -v
#1 SMP PVE 4.4.98-102 (Sun, 7 Jan 2018 13:15:19 +0100)
root@proxmox:~# cat /proc/cpuinfo | grep bugs | uniq -c
     24 bugs            :
root@proxmox:~# dmesg |grep iso
[    0.000000] Kernel/User page tables isolation: enabled
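Since the 4.4 kernel here leaves the `bugs` line in /proc/cpuinfo empty, the dmesg line is the reliable indicator. A small helper can make the check scriptable (a minimal sketch; `kpti_status` is a made-up name and it assumes the exact dmesg wording shown above - newer kernels additionally expose /sys/devices/system/cpu/vulnerabilities/):

```shell
# Sketch: report whether KPTI shows as enabled in dmesg-style text on stdin.
# Assumption: the "Kernel/User page tables isolation: enabled" wording
# printed by these PVE kernels.
kpti_status() {
  if grep -q 'page tables isolation: enabled'; then
    echo "enabled"
  else
    echo "not reported"
  fi
}

echo '[    0.000000] Kernel/User page tables isolation: enabled' | kpti_status  # -> enabled
```

On a live system you would pipe the boot log into it: `dmesg | kpti_status`.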
 

fabian

Proxmox Staff Member
updated kernels for both 4.4 and 4.13 are available in pvetest:
pve-kernel (4.4.98-103) unstable; urgency=medium

* support loading newer AMD microcode

* fix backport of intel perf event causing crash on some HW

* update Spectre KVM PoC fix for AMD

-- Proxmox Support Team <support@proxmox.com> Mon, 8 Jan 2018 10:15:44 +0100

@anigwei , @stef1777 : please report back if this fixes your boot issue!

pve-kernel (4.13.13-35) unstable; urgency=medium

* KPTI: disable on AMD

* KPTI: add follow-up fixes

* update Spectre KVM PoC fix for AMD

-- Proxmox Support Team <support@proxmox.com> Mon, 8 Jan 2018 10:26:58 +0100

this kernel no longer prints the KPTI status on boot if it gets automatically disabled because the CPU is not affected by Meltdown.

@joshin and other people still using the 4.10 kernel because of the SCSI kernel oops: the 4.13.13-4-pve kernel (4.13.13-35) also contains a revert of the buggy commit in question, please test and provide feedback! thanks.
 

morph027

Well-Known Member
I see a very slight increase ... the node is not under full load; I can try to pick another one with more load for comparison. (Updated and kexec soft-rebooted just before the startup peak.)

Bildschirmfoto vom 2018-01-08 12-59-56.png

System specs:

  • Supermicro X10DRi
  • 2 x Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
  • 256 GB
 

Sebastian2000

Member
Hello

I understand that Proxmox must also fix the bug, but is the risk really that high if all VMs have already been patched?
 

Xabi

New Member
Hello!

Be careful: I just tried pve-kernel 4.4.98-102 on one of our Proxmox VE 4.x servers, an HP DL120 G7, and the server crashed at boot.

I got this just after Grub selection and screen refresh.

View attachment 6615

Reverted back to 4.4.98-101 via Grub; it works.

I feel that this story will cause us a lot of trouble.

Sincerely,


Same problem here.
 
May 17, 2017
Thanks for the update! Just wanted to report no issues here:

Code:
Base Board Information
    Manufacturer: Intel Corporation
    Product Name: S1200SP
    Version: H57532-210

[root@spice ~]# uname -a
Linux [redacted] 4.4.98-3-pve #1 SMP PVE 4.4.98-102 (Sun, 7 Jan 2018 13:15:19 +0100) x86_64 GNU/Linux
[root@spice ~]#

[root@spice ~]# grep 'Kernel/User page tables isolation' /var/log/syslog
Jan  8 16:20:26 spice kernel: [    0.000000] Kernel/User page tables isolation: enabled
[root@spice ~]#
 
Oct 11, 2016
I am wondering why +pcid is not added to the KVM command line.

I have done some testing with updated Linux kernels: without this parameter, the newly patched Debian 9 kernels are much slower than when +pcid is added when starting KVM.


-cpu SandyBridge,+pcid,+kvm_pv_unhalt,+kvm_pv_eoi,enforce,vendor=GenuineIntel -m 8192
vs
-cpu SandyBridge,+kvm_pv_unhalt,+kvm_pv_eoi,enforce,vendor=GenuineIntel

If CPU type "host" or the default is used, pcid is exposed to the guest, but then migrating between different hardware generations (HP Gen8, Gen9 and Gen10) no longer works.

I would suggest adding +pcid when starting KVM.
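Whether a guest can benefit from the PCID-based KPTI fast path is easy to verify from inside the VM: the flag has to show up in /proc/cpuinfo. A tiny helper along these lines works (sketch; `has_flag` is a name made up here, assuming the usual `flags` line format):

```shell
# Sketch: check whether a CPU flag appears in /proc/cpuinfo-style text on stdin.
has_flag() {
  # usage: has_flag <flag>   e.g.  has_flag pcid < /proc/cpuinfo
  grep -E '^flags' | grep -qw -- "$1"
}

printf 'flags\t\t: fpu vme pge pcid sse sse2\n' | has_flag pcid && echo "pcid available"
```

Run inside both the host and a guest to see whether the flag survives the configured `-cpu` model.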
 
Dec 5, 2017
Also stuck on grub recovery on a ProLiant DL380 G6 using ZFS

How do you boot the old kernel from the grub rescue shell? Pretty sure my last working kernel was 4.13.8-2-pve with the latest Proxmox 5.1 version.

Here are my current options:
Screen Shot 2018-01-08 at 11.47.50 AM.png

Booting from the live ISO repair doesn't work either; it says: Unable to find boot device. Running zpool list from the debug installation shows no pools available.


Thanks
 

fabian

Proxmox Staff Member
Also stuck on grub recovery on a ProLiant DL380 G6 using ZFS

How do you boot the old kernel from the grub rescue shell? Pretty sure my last working kernel was 4.13.8-2-pve with the latest Proxmox 5.1 version.

Here are my current options:
View attachment 6621

Booting from the live ISO repair doesn't work either; it says: Unable to find boot device. Running zpool list from the debug installation shows no pools available.


Thanks

this is in no way related to the topic of this thread, please open a new one.
 

Daniel0705

New Member
Hi,

is it correct to just update the Proxmox installation and not take any action in the guest (Windows)?
For example, adding the registry key (QualityCompat) so that the patch is automatically downloaded via Microsoft Update?

Thank you.
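For reference, the key in question is the one Microsoft documented for antivirus compatibility; inside the Windows guest (not on the Proxmox host) it would be set roughly like this (a sketch based on Microsoft's published guidance at the time - double-check the current requirements before relying on it):

```
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat" /v "cadca5fe-87d3-4b96-b7fb-a231484277cc" /t REG_DWORD /d 0 /f
```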
 

Sich

New Member
I have just updated one of my Proxmox 5.1 hosts, and the latest patch only fixes variant 3 (Meltdown), like on Debian Stretch.
 

Attachments

  • Capture.PNG

fabian

Proxmox Staff Member
I have just updated one of my Proxmox 5.1 hosts, and the latest patch only fixes variant 3 (Meltdown), like on Debian Stretch.

yes (to be 100% correct, it only completely fixes variant 3, which is called "Meltdown").

the patches for (partial) Spectre mitigation on the kernel side are not yet finished upstream and are still being heavily reworked. Our kernel contains a specific fix to mitigate the Google proof-of-concept exploit that reads host/hypervisor memory from within a KVM guest. Once the kernel, KVM and Qemu patches mitigating more of Spectre have been reviewed and finalized, we will include them in our kernel and other packages as well.
 

gmed

Well-Known Member
Upgraded 4 of our servers tonight:

1x HP ProLiant ML110 Gen7
1x Fujitsu TX150 S7
1x Supermicro and 1x Lenovo

The Supermicro board runs Proxmox VE 5.x, the others 4.x.
All went fine except the ML110 Gen7: kernel panic after reboot.
Booting with the older kernel was OK.

It seems one should be careful with HP hardware?

Some advice regarding a dist-upgrade would be nice: which kernel will be used during an upgrade from 4.x to 5.x?

Maybe it would be a good idea to use an "old" 5.x kernel and keep the patched ones optional?
 

fabian

Proxmox Staff Member
Upgraded 4 of our servers tonight:

1x HP ProLiant ML110 Gen7
1x Fujitsu TX150 S7
1x Supermicro and 1x Lenovo

The Supermicro board runs Proxmox VE 5.x, the others 4.x.
All went fine except the ML110 Gen7: kernel panic after reboot.
Booting with the older kernel was OK.

It seems one should be careful with HP hardware?

Some advice regarding a dist-upgrade would be nice: which kernel will be used during an upgrade from 4.x to 5.x?

Maybe it would be a good idea to use an "old" 5.x kernel and keep the patched ones optional?

please always include which kernel package you installed and in which version - there have been two public iterations already, and there will likely be more over the next weeks. Did you install 4.4.98-3-pve in version -103 or -102? -102 was reported to be problematic on some HP systems; -103 seems to have fixed that issue.
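To answer that precisely, `dpkg-query` shows the exact package version; the helper below just splits out the Debian revision for comparison (sketch; `kernel_revision` and the package glob in the comment are illustrative):

```shell
# Sketch: extract the Debian revision (-102 vs -103) from a version string.
# On a PVE host the installed version can be listed with e.g.:
#   dpkg-query -W -f='${Package} ${Version}\n' 'pve-kernel-*'
kernel_revision() {
  # "4.4.98-103" -> "103"
  echo "${1##*-}"
}

kernel_revision "4.4.98-103"  # -> 103
```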
 

gmed

Well-Known Member
The ML110 crashed with the -102 version.
I'm going to check -103 at the weekend.
You will get feedback then.

I see that -103 is available right now.
Thanks for your hard work.
 
May 27, 2015
Hello!

Be careful: I just tried pve-kernel 4.4.98-102 on one of our Proxmox VE 4.x servers, an HP DL120 G7, and the server crashed at boot.

I got this just after Grub selection and screen refresh.

View attachment 6615

Reverted back to 4.4.98-101 via Grub; it works.

I feel that this story will cause us a lot of trouble.

Sincerely,

Hi,

Exactly the same issue with 4.4.98-102 on a Fujitsu PRIMERGY TX140 S2. I cannot upgrade to -103; I cannot see it on pve-enterprise yet.

Thanks for your hard work!
 

fabian

Proxmox Staff Member
Hi,

Exactly the same issue with 4.4.98-102 on a Fujitsu PRIMERGY TX140 S2. I cannot upgrade to -103; I cannot see it on pve-enterprise yet.

Thanks for your hard work!

it's available in pve-enterprise now.
 
