Search results

  1.

    Issue with latest (4.13.13-5-pve) kernel?

    >> Oops: 0010 [#4] SMP PTI
    Maybe related to the Meltdown protection? Can you try adding the nopti kernel option in /etc/default/grub, then running update-grub and rebooting?
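The grub tweak above can be sketched as a shell snippet. It works on a sample copy of the file, since editing /etc/default/grub for real requires root and a reboot:

```shell
# Append nopti to GRUB_CMDLINE_LINUX_DEFAULT, shown here on a sample copy.
# On the real host you would edit /etc/default/grub itself, then run
# update-grub and reboot.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > /tmp/grub.sample
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 nopti"/' /tmp/grub.sample
cat /tmp/grub.sample
```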
  2.

    Kernel update

    >> gzip: stdout: No space left on device
    Remove old kernel image files manually from /boot/.
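One way to script the cleanup suggested above. This sketch runs against a throwaway directory with made-up version strings; on a real host you would rather purge the old pve-kernel-* packages with apt so dpkg stays consistent:

```shell
# Remove kernel images and initrds except the running kernel's.
# Simulated in a temp directory with placeholder version strings.
boot=$(mktemp -d)
touch "$boot/vmlinuz-4.13.13-2-pve" "$boot/initrd.img-4.13.13-2-pve" \
      "$boot/vmlinuz-4.13.13-5-pve" "$boot/initrd.img-4.13.13-5-pve"
keep="4.13.13-5-pve"          # on a real host: keep="$(uname -r)"
for f in "$boot"/vmlinuz-* "$boot"/initrd.img-*; do
    case "$f" in
        *"$keep") ;;          # keep the running kernel's files
        *) rm -- "$f" ;;
    esac
done
ls "$boot"
```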
  3.

    Meltdown and Spectre Linux Kernel fixes

    Yes. And you need a recent kernel in your VM too (>= 4.14) to get the benefit of PCID.
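A quick way to verify this inside a guest: the kernel exposes the CPU flag list through /proc, so you can check whether PCID is visible to the VM at all.

```shell
# Check whether the CPU visible to this kernel advertises PCID,
# which patched (>= 4.14) kernels use to reduce the KPTI overhead.
if grep -qw pcid /proc/cpuinfo; then
    echo "PCID available"
else
    echo "PCID not available"
fi
```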
  4.

    Meltdown and Spectre Linux Kernel fixes

    I'm waiting for retpoline integration in the Ubuntu kernel; it seems to be the faster way to fix variant 2.
  5.

    Meltdown and Spectre Linux Kernel fixes

    https://pve.proxmox.com/wiki/Package_Repositories#_proxmox_ve_no_subscription_repository
  6.

    Meltdown and Spectre Linux Kernel fixes

    Don't install the microcode update for now; it's known to be buggy and to cause instability.
  7.

    guest on kernel 4.14-12 fails to show NF conntrack

    Maybe because KPTI prevents access to kernel memory (and thus to conntrack) from userland? Workaround: conntrack -L | wc -l ?
  8.

    ocfs2 kernel bug

    ocfs2 is not dead (there are a lot of commits on the dev mailing list), but it's full of bugs. (I'm using it in production in some VMs, and since kernel > 3.16 I've had a lot of random crashes.) For VM hosting, it's better to use a shared LVM.
  9.

    Live migration with local directories

    Sorry, I'm very busy with Spectre and Meltdown at the moment; I'll work on them next month (on the latest Proxmox 5.1). Edit: My previous patches for Proxmox 4 were posted last year here: https://pve.proxmox.com/pipermail/pve-devel/2017-February/025441.html
  10.

    VM lost connectivity after live migration

    Related to the Proxmox version? The guest OS? There are known bugs with old QEMU and old guest kernels too, where the virtio NIC does not send a gratuitous ARP after live migration.
  11.

    fuckwit/kaiser/kpti

    The kvm64 CPU model protects you against Spectre in your VM, but not Meltdown (you need to patch your guest kernel). The host kernel needs to be updated to prevent one VM from accessing the memory of another VM.
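On kernels that carry the fixes (4.15+, plus distro backports such as the patched pve kernels), the mitigation state for each flaw can be read from sysfs; older kernels simply lack these files, hence the existence check:

```shell
# Print the kernel's reported mitigation status for each known CPU flaw.
# The directory only exists on patched kernels.
for f in /sys/devices/system/cpu/vulnerabilities/*; do
    [ -e "$f" ] || continue   # kernel too old: no such files
    printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
done
```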
  12.

    Meltdown/spectre cpu vulns

    https://forum.proxmox.com/threads/fuckwit-kaiser-kpti.39025/
  13.

    Hyper-v Gen 2 Windows Guest conversion

    It allows format conversion too (--format qcow2).

    qm importdisk <vmid> <source> <storage> [OPTIONS]

    Import an external disk image as an unused disk in a VM. The image format has to be supported by qemu-img(1).

    <vmid>: <integer> (1 - N) The (unique) ID of...
  14.

    Hyper-v Gen 2 Windows Guest conversion

    4/5/6: Proxmox 5 has "qm importdisk <vmid> <source> <storage>" (it imports any disk format onto any supported Proxmox storage). Have you tried enabling OVMF to get UEFI support (instead of converting everything to MBR)?
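A hypothetical invocation of the command above; the VM ID, source path, and storage name are made-up values for illustration:

```shell
# Import a Hyper-V disk into VM 200, converting it to qcow2 on the way.
# 200, the .vhdx path, and the "local" storage name are placeholders.
qm importdisk 200 /mnt/hyperv/win2016.vhdx local --format qcow2
# The image then appears as an unused disk on VM 200 and can be attached
# from the GUI (Hardware tab) or with qm set.
```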
  15.

    [SOLVED] how to disable ksm sharing in proxmox v3.4

    /etc/default/ksmtuned:
    # start ksmtuned at boot [yes|no]
    START=no
  16.

    CephFS MDS Failover

    Strange, MDS failover is automatic for me, without any tuning. My ceph -w output: "mds: cephfs-1/1/1 up {0=myhost1.lan=up:active}, 2 up:standby" Note the 2 standby nodes. Are you sure that all MDS daemons are running on your cluster?
  17.

    ceph : [client] rbd cache = true override qemu cache=none|writeback

    No, it has been fixed a long time ago:
    qemu cache=none -> rbd_cache=false
    qemu cache=writeback -> rbd_cache=true
  18.

    Proxmox 5.1 / Ceph / Linked Clones

    You should have the base image ID in your linked clone's disk path.
  19.

    How to best migrate to new host?

    I'll try to rebase it on the latest Proxmox 5.x next month.
  20.

    Is Ceph too slow and how to optimize it?

    You need to restart your Ceph cluster (mon/osd) and all the VMs.