Recent content by sommarnatt

  1. Hot-plug Hard Disk started to fail on most VMs

    Hi! For some reason the guest VMs have stopped accepting hot-plug. At first I thought it was because all PCI slots were full, but it doesn't look that way. It's a long shot, but the VMs without issues haven't been rebooted with the +pcid CPU flag yet. Has anyone solved this one? Mar 8 11:20:09 xxxxxx kernel...
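A quick way to narrow something like this down (a sketch; VMID 100 is a placeholder) is to check whether disk hot-plug is still enabled in the VM config and which CPU flags the VM carries:

```shell
# Placeholder VMID: confirm hot-plug is enabled for disks and
# inspect the cpu line for flags such as +pcid.
qm config 100 | grep -E '^(hotplug|cpu):'
```

This only reads the config; it doesn't tell you why hot-plug fails, but it separates "feature disabled" from "slots exhausted".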
  2. Meltdown and Spectre Linux Kernel fixes

    If the kernel for your UCS system has backported PCID support, then it might have less of a performance impact with PCID enabled in Proxmox.
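Whether the CPU itself advertises PCID can be checked from the flags line of /proc/cpuinfo; a minimal sketch (the flags string here is an example — in practice read it with `grep -m1 '^flags' /proc/cpuinfo`):

```shell
# Example flags line; substitute the real one from /proc/cpuinfo
flags="fpu vme pse tsc msr pae mce cx8 pcid sse4_2"
if printf '%s\n' "$flags" | grep -qw pcid; then
    echo "pcid supported"
else
    echo "pcid not supported"
fi
```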
  3. Meltdown and Spectre Linux Kernel fixes

    I'm not affiliated with Proxmox, but pulling packages from the testing repo onto a production server should be avoided. We might have to wait for upstream fixes, which should be available on Monday, and then a day or so for Proxmox to pull them in and test...
  4. Meltdown and Spectre Linux Kernel fixes

    No, PCID is a CPU feature from 2010 that wasn't really used until Linux kernel 4.14. Now, with Meltdown, it's actually usable to counteract some of the performance loss. I haven't done any benchmarking, but if you're going to reboot your guests anyway, make sure to add PCID...
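Adding the flag to a guest can be sketched like this (the VMID and base CPU type are placeholders; note the guest needs a full stop/start, not just a reboot from inside, to pick up new CPU flags):

```shell
# Placeholder VMID 100 and base type kvm64: expose PCID to the guest
qm set 100 --cpu kvm64,flags=+pcid
```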
  5. Meltdown and Spectre Linux Kernel fixes

    There's a Spectre variant 1 PoC around that lets you read the RAM of the host from a KVM guest. You should at least grab the latest PVE kernel with the fix for that PoC. Also, there's microcode available already from Intel for Spectre, but we still need to wait for the kernel updates as well...
  6. Meltdown and Spectre Linux Kernel fixes

    Well - I wouldn't recommend running with CPU type "host" if you are running a cluster of several Proxmox servers, since they might differ in CPU architecture. That can lead to live-migration problems (when migrating from a node with flags that aren't available on the destination node). The way we do it is that...
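Before relying on "cpu: host", it's worth diffing the flag sets of the nodes involved; a minimal sketch (the flag strings are examples — in practice collect each node's line with `ssh <node> "grep -m1 '^flags' /proc/cpuinfo"`):

```shell
# Example flag sets for two hypothetical nodes
flags_a="fpu sse pcid avx"
flags_b="fpu sse avx"
# Any flag present on A but missing on B can break live migration A -> B
missing=$(for f in $flags_a; do
    printf '%s\n' "$flags_b" | grep -qw "$f" || echo "$f"
done)
echo "flags only on node A: ${missing:-none}"
```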
  7. Meltdown and Spectre Linux Kernel fixes

    Did you log in to a Proxmox host's web GUI with an updated pve-manager? If you log in to one with an old pve-manager, you won't see it anywhere (I had the same issue where I had only updated one host, but logged in through another and didn't see it).
  8. Meltdown and Spectre Linux Kernel fixes

    Oh, Intel just updated their site today with microcode updates: https://downloadcenter.intel.com/download/27431/Linux-Processor-Microcode-Data-File?v=t
  9. Meltdown and Spectre Linux Kernel fixes

    You should at the very least patch Meltdown, as it is the easiest one to "use". That means patching the guest and rebooting it. Spectre needs a microcode update; Intel has released some already to certain companies like HP, Dell and Supermicro, and they've created BIOS updates for some CPUs / servers. It...
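On kernels that already carry the mitigation patches, the per-issue status can be read from sysfs; a sketch (the directory simply doesn't exist on kernels that predate the fixes):

```shell
# List each vulnerability file and its mitigation status, if the
# running kernel exposes them
if [ -d /sys/devices/system/cpu/vulnerabilities ]; then
    grep -r . /sys/devices/system/cpu/vulnerabilities/
else
    echo "no vulnerabilities directory: kernel predates the mitigation patches"
fi
```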
  10. Meltdown and Spectre Linux Kernel fixes

    https://www.qemu.org/2018/01/04/spectre/ https://lists.nongnu.org/archive/html/qemu-devel/2018-01/msg01386.html
  11. fuckwit/kaiser/kpti

    I'd say it depends on what you're running on those KVM guests. Unpatched Meltdown might mean that guest RAM on the machine can be read, so if you're running several users or applications with vulnerabilities of their own, those could be exploited to run Meltdown code and read the...
  12. drbd9 in production?

    Hi! So, has anyone been using DRBD9 with redundancy 3 in a production environment? Any issues so far, or things to consider?
  13. Upgrading Proxmox but keeping ceph version possible?

    Hi! We're running Proxmox 3.2-2 and we'd like to upgrade it to 3.4. Is it possible to keep ceph packages (both librados and osd/mon packages) on the version they're at now but still update the rest of pve? Can we just hold those packages through apt/dpkg or will there be dependency issues?
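Holding the packages through apt can be sketched as follows (the package names are assumptions — check `dpkg -l | grep -i ceph` for the exact ones installed on your node):

```shell
# Assumed package names; verify with: dpkg -l | grep -i ceph
apt-mark hold librados2 librbd1 ceph ceph-common
apt-get update && apt-get dist-upgrade   # held packages are kept back
```

Dependency issues can still surface if updated PVE packages require a newer librados; in that case apt will report the conflict rather than silently upgrade the held packages.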
  14. Docker containers, new interesting project

    Hi 1nerdyguy - did you install CoreOS from ISO, or did you successfully manage to use their KVM/QEMU ISO from the CoreOS site? If so, how did you manage to import and start it?
  15. KVM/Qemu Online Migration often ends up with 100% cpu, no ping, frozen VM

    Tried disabling intel_pstate and rebooting the hosts, but that didn't work out (although cpuinfo now reports a stable frequency). However, the solution was to change the clocksource in the GUEST from kvm-clock to tsc. Now we're able to live-migrate CentOS / CloudLinux guests. It seems to be a bug in...
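For reference, the runtime switch inside the guest looks roughly like this (needs root; to make it persistent across reboots, add clocksource=tsc to the guest's kernel command line):

```shell
# Show available and current clocksources (run inside the guest)
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
# Switch to tsc at runtime (persist via the clocksource=tsc boot parameter)
echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource
```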

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
