FUCKWIT / KAISER / KPTI

Dear @fabian , what do you think about the performance degradation with the new patch? It is critical for databases like PostgreSQL... I think we need an option to enable or disable it.
 
A check box in the interface, which would allow disabling/enabling it for the entire cluster and show whether it is disabled, would be easier to manage. And you would actually see it. If it is only a parameter in the GRUB config file, it is easy to overlook.
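For reference, the switch is a kernel command-line parameter; a minimal sketch for turning KPTI off via GRUB, assuming a kernel that understands pti= (some patched kernels use nopti instead; check your kernel's documentation):
Code:
# /etc/default/grub -- append pti=off (or nopti, depending on the kernel)
GRUB_CMDLINE_LINUX_DEFAULT="quiet pti=off"

# then regenerate the GRUB config and reboot for it to take effect
update-grub
reboot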
 
Not many details are known yet. We will provide patched kernels for 4.4 and 5.1 as soon as the final patches are available. 3.4 is EOL; there haven't been any updates for quite a while, and there won't be any now either.
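Once the final packages are published, they should arrive through the regular repositories; a minimal sketch of picking them up, assuming a correctly configured Proxmox repository:
Code:
apt update
apt dist-upgrade   # pulls in the new pve-kernel package, among others
reboot             # the mitigation only takes effect on the new kernel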

Well, I would ask you to reconsider: Red Hat provides updates and OpenVZ is certainly spinning up their update machine, so it would be a nice move to repackage this kernel for 3.4 and those still in need.
 
Well, Debian supports wheezy until May 2018, though Ubuntu 12.04 only until April 2017. It's a little irrelevant to this topic; check https://forum.proxmox.com/threads/understanding-proxmox-3-4-eol-and-4-0.25079 for more detailed info, as well as https://forum.proxmox.com/threads/proxmox-ve-3-4-support-lifecycle.26205/ and https://forum.proxmox.com/threads/proxmox-ve-support-lifecycle.35755/#post-177492

If the upgrades from 3 to 4 and from 4 to 5 were flawless, I wouldn't keep one cluster still on PVE 3.
PVE 4 is ending soon as well...

I don't expect that we will get a kernel patch for PVE 3.
 
In the end, it's open source and we will help ourselves anyway...
 
OpenVZ needs a special kernel, so it is not certain it will get patches for this flaw. It is a bit like Xen, where no patches are available yet.
KVM and LXC are maintained inside the standard Linux kernel, so they will benefit from the vanilla kernel patches.
 
If I am not wrong, PVE 3.4 is based on the OVZ RHEL kernel, so when the OVZ team releases the patched kernel it would be a nice and responsible move from the Proxmox team to help us keep our 3.4 installations secure, even though the release is EOL.
 
I have compiled a kernel for Proxmox 3.x myself as I still have many OpenVZ nodes running.
The kernel has been tested and so far works fine for me.

You can download it at: https://git.vnetso.com/henryspanka/pve-kernel-2.6.32/tags/v2.6.32-49-pve_2.6.32-188
Feel free to check the source code and compile it yourself if you don't trust me :)

Install it with the following command and then reboot:
Code:
dpkg -i pve-kernel-2.6.32-49-pve_2.6.32-188_amd64.deb
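After the reboot it is worth confirming that the machine actually came up on the new kernel (the version string follows from the package name above):
Code:
uname -r
# expected output: 2.6.32-49-pve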
 
Doesn't an attacker first need access to your network and special rights? To what extent does a firewall protect against Meltdown and Spectre? What about Snort IDS or OSSEC systems? Just some thoughts. :rolleyes:
 
It seems that the problem has an impact on containers, not directly on full virtualization machines. Read this. Source: https://meltdownattack.com/meltdown.pdf

https://spectreattack.com/

many hosting or cloud providers do not have an abstraction layer for virtual memory. In such environments, which typically use containers, such as Docker or OpenVZ, the kernel is shared among all guests. Thus, the isolation between guests can simply be circumvented with Meltdown, fully exposing the data of all other guests on the same host. For these providers, changing their infrastructure to full virtualization or using software workarounds [...]
 
Yes, correct. However, in a hosting environment (VMs, webspace, etc.) you cannot control which applications users run on their servers, and they can exploit this to read memory from other virtual machines or from the host.

Meltdown indeed only has an impact on containers and not on virtual machines. However, with Spectre it's possible to read host memory from within a VM (Type 2), so full virtualisation is not a complete workaround and only mitigates the Meltdown bug.

For more information for VMs see: https://www.qemu.org/2018/01/04/spectre/
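As a quick status check on the host: sufficiently new (or patched) kernels expose the mitigation state via sysfs; on kernels without that interface the files simply do not exist:
Code:
# one line per known flaw; wording varies with kernel and microcode version
grep . /sys/devices/system/cpu/vulnerabilities/*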
 
Yes, I see your point here.


Yes, I did see the article:

Patching the host kernel is sufficient to block attacks from guests to the host. On the other hand, in order to protect the guest kernel from a malicious userspace, updates are also needed to the guest kernel and, depending on the processor architecture, to QEMU.

So, some thoughts:

As a workaround, you could update the guest kernels first and the host later, since the host needs a reboot. Just a temporary solution. You could also move some containers into a KVM guest, so the host is isolated from the threat. And of course always protect the host with a firewall, just to keep things separated. Like a submarine has chambers: if one gets flooded, you can shut that unit down. :rolleyes:
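If you go the update-the-guests-first route, one way to confirm KPTI is active inside a guest after the kernel update is the boot log (the exact message wording varies between kernel versions):
Code:
dmesg | grep -i 'page table isolation'
# a patched kernel typically logs: Kernel/User page tables isolation: enabled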
 

:D Yes, of course, if you control the virtualisation environment. However, in my case customers have their own VMs running on my hypervisors; I don't have access to those VMs, and I cannot trust my customers not to try to exploit this vulnerability.
 
So I assume some of us will be upgrading the BIOS on our motherboards to fix the issue. Making that easy to do would be a good thing.

At https://planet.debian.org/ check this example of someone sharing how to do BIOS upgrades on two different hardware types: http://sven.stormbind.net/blog/posts/misc_bios_updates_dell_latitude_lenovo_thinkpad/
We have 10 ThinkPads, so I'll follow the above.

Does someone have a link with info on how to set up a Supermicro boot USB using the Linux CLI?

A use-at-your-own-risk wiki page would be good. Here is a start; delete if not needed: https://pve.proxmox.com/wiki/Host-bios-upgrade
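Regarding the Supermicro question: until something official exists, one common approach is a FreeDOS stick built entirely from the CLI. A sketch, assuming the FreeDOS full USB image (FD12FULL.img), that /dev/sdX is your stick, and that supermicro-bios/ is the unzipped vendor download; double-check the device name, dd will happily overwrite the wrong disk:
Code:
# write the FreeDOS image to the stick
dd if=FD12FULL.img of=/dev/sdX bs=4M && sync

# copy the unzipped vendor BIOS package (flash tool + ROM) onto it
mount /dev/sdX1 /mnt
cp -r supermicro-bios/* /mnt/
umount /mnt

# boot from the stick and run the vendor's flash batch file at the DOS prompt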
 
Usually the OEMs provide some documentation on how best to update their systems. Lenovo offers "System Update" for Windows, as well as different options for Unix etc., and the same goes for Dell, HPE, you name it. If you have a remote console available, like iDRAC (Dell), iLO (HPE), iRMC (FTS) and others, you can access the system's BIOS remotely via a virtual terminal application and perform the necessary BIOS and firmware upgrades as if you were sitting in front of the system.

My recommendation is to always check the BIOS settings after the update and before the first OS reboot: I've already seen some lousy Intel boards, for example, that reset their settings to factory defaults after an update, which gives you hell if onboard RAID controllers are involved...

For notebooks, all of the named vendors also offer central client-management solutions that allow you to remotely roll out updates, either via their own tooling or with e.g. SCCM.
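On the OS side, Debian-based hosts can also load updated CPU microcode at boot via packages, which complements (but does not replace) a BIOS update; a sketch, assuming non-free is enabled in sources.list:
Code:
apt update
apt install intel-microcode    # or amd64-microcode on AMD systems
reboot
dmesg | grep -i microcode      # shows the microcode revision that was loaded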
 
