4.15 based test kernel for PVE 5.x available

Discussion in 'Proxmox VE: Installation and configuration' started by fabian, Mar 12, 2018.

  1. fabian (Proxmox Staff Member)

    A pve-kernel-4.15 meta package depending on a preview build of Ubuntu Bionic's 4.15 kernel is available on pvetest. It is provided as an opt-in package in order to catch potential regressions and hardware incompatibilities early on, and to allow testing on a wide range of systems before the default kernel series is switched over to 4.15 and support for our 4.13 based kernel is phased out at some point in the future.

    In order to try it out on your test systems, configure the pvetest repository and run:
    Code:
    apt update
    apt install pve-kernel-4.15
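
    For reference, the pvetest repository entry for PVE 5.x on Debian Stretch looks like this (the file name is just an example):
    Code:
    # /etc/apt/sources.list.d/pvetest.list (example location)
    deb http://download.proxmox.com/debian/pve stretch pvetest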
    
    The pve-kernel-4.15 meta package will keep the preview kernel updated, just like the pve-kernel-4.13 meta package (recently pulled out of the proxmox-ve package) does for the stable 4.13 based kernel.

    Also on pvetest you will find a pve-headers-4.15 meta package in case you need headers for building third-party modules, a linux-tools-4.15 package with a compatible perf, as well as an updated pve-firmware package with the latest blobs.
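
    For example, to install the headers and tools alongside the kernel:
    Code:
    apt install pve-headers-4.15 linux-tools-4.15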

    Notable changes besides the major kernel update include the removal of the out-of-tree Intel NIC modules, which are not (yet) compatible with 4.15 kernels. This removal will be re-evaluated at some later point, before the final switch to 4.15 as the new stable kernel in PVE 5.

    There are no plans to support both 4.13 and 4.15 in the long term: once the testing phase of 4.15 is over, 4.15 will become the new default kernel and 4.13 will no longer receive updates.

    Happy testing, and looking forward to feedback!
     
    chrone and FibreFoX like this.
  2. apusgrz (New Member)

    When does it get into the stable repository?
     
  3. tom (Proxmox Staff Member)

    As soon as we consider it stable.

    Proxmox VE 5.2 will use 4.15 as the default kernel; for 5.1 it will be an optional kernel.
     
  4. efeu (Member, Proxmox VE Subscriber)

    CPU: Ryzen 1700

    machine: q35 is broken now.
    CPU: host is broken now too: awful performance, high CPU load, and Windows 10 is told that it has 265 MB of L3 cache instead of 16 MB. Opteron_G4/G5 are broken as well; G3 gives awful performance.

    Well, unusable right now.

    pveversion --verbose
    proxmox-ve: 5.1-42 (running kernel: 4.15.3-1-pve)
    pve-manager: 5.1-47 (running version: 5.1-47/97a08ab2)
    pve-kernel-4.13: 5.1-42
    pve-kernel-4.15: 5.1-1
    pve-kernel-4.15.3-1-pve: 4.15.3-1
    pve-kernel-4.13.13-6-pve: 4.13.13-42
    pve-kernel-4.13.13-5-pve: 4.13.13-38
    pve-kernel-4.13.13-3-pve: 4.13.13-34
    pve-kernel-4.13.4-1-pve: 4.13.4-26
    pve-kernel-4.10.17-4-pve: 4.10.17-24
    pve-kernel-4.10.17-3-pve: 4.10.17-23
    pve-kernel-4.10.17-2-pve: 4.10.17-20
    pve-kernel-4.10.17-1-pve: 4.10.17-18
    pve-kernel-4.10.8-1-pve: 4.10.8-7
    pve-kernel-4.10.5-1-pve: 4.10.5-5
    pve-kernel-4.10.1-2-pve: 4.10.1-2
    pve-kernel-4.4.35-1-pve: 4.4.35-77
    corosync: 2.4.2-pve3
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon: 1.2-2
    libjs-extjs: 6.0.1-2
    libpve-access-control: 5.0-8
    libpve-apiclient-perl: 2.0-4
    libpve-common-perl: 5.0-28
    libpve-guest-common-perl: 2.0-14
    libpve-http-server-perl: 2.0-8
    libpve-storage-perl: 5.0-17
    libqb0: 1.0.1-1
    lvm2: 2.02.168-pve6
    lxc-pve: 2.1.1-3
    lxcfs: 2.0.8-2
    novnc-pve: 0.6-4
    proxmox-widget-toolkit: 1.0-12
    pve-cluster: 5.0-21
    pve-container: 2.0-19
    pve-docs: 5.1-16
    pve-firewall: 3.0-5
    pve-firmware: 2.0-4
    pve-ha-manager: 2.0-5
    pve-i18n: 1.0-4
    pve-libspice-server1: 0.12.8-3
    pve-qemu-kvm: 2.11.1-3
    pve-xtermjs: 1.0-2
    pve-zsync: 1.6-15
    qemu-server: 5.0-23
    smartmontools: 6.5+svn4324-1
    spiceterm: 3.0-5
    vncterm: 1.5-3
    zfsutils-linux: 0.7.6-pve1~bpo9
     
    #4 efeu, Mar 13, 2018
    Last edited: Mar 13, 2018
  5. dcsapak (Proxmox Staff Member)

    Bigkuhuna24 likes this.
  6. efeu (Member, Proxmox VE Subscriber)

    What about the CPU model problem?

    The only way to get the VMs partially running is with the kvm64 CPU model, but latency is very high.
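
    (For reference, a minimal way to switch the CPU model from the CLI; VMID 100 is just an example:)
    Code:
    qm set 100 --cpu kvm64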
     
    #6 efeu, Mar 13, 2018
    Last edited: Mar 13, 2018
  7. fabian (Proxmox Staff Member)

    Can you try downgrading to pve-qemu 2.9?
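
    A sketch of such a downgrade via apt, assuming 2.9.1-9 is the 2.9 build still available in the configured repository:
    Code:
    apt install pve-qemu-kvm=2.9.1-9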
     
  8. Jospeh Huber (Member)

  9. fabian (Proxmox Staff Member)

    The preview kernel is now available in pve-no-subscription as well, so there is no need to configure pvetest just for the kernel/header packages.
     
  10. morph027 (Active Member)

    Great, just installed it on some lab machines (including one with PCIe passthrough... will see how it goes).
     
  11. Vasu Sreekumar (Active Member)

    I ran kernel 4.15 for 54 hours with 5 LXC containers and a cron job to stop and start them every 5 minutes.

    No issues at all.
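
    A minimal sketch of such a cron job (the container IDs are examples):
    Code:
    # /etc/cron.d/lxc-restart-test: cycle five containers every 5 minutes
    */5 * * * * root for id in 101 102 103 104 105; do pct stop $id; pct start $id; done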
     
    #11 Vasu Sreekumar, Mar 15, 2018
    Last edited: Mar 16, 2018
  12. efeu (Member, Proxmox VE Subscriber)

    VM config:
    agent: 1
    balloon: 0
    bios: ovmf
    boot: cdn
    bootdisk: scsi0
    cores: 8
    cpu: kvm64
    efidisk0: ssdpool:vm-100-disk-1,size=128K
    #hostpci0: 07:00,pcie=1,x-vga=on
    hotplug: network,usb
    ide0: none,media=cdrom
    keyboard: de
    machine: q35
    memory: 12288
    name: win10
    net0: virtio=72:DC:5B:51:AD:D2,bridge=vmbr0
    numa: 0
    ostype: win10
    scsi0: ssdpool850:vm-100-disk-1,discard=on,size=447G
    scsihw: virtio-scsi-pci
    smbios1: uuid=76521a3b-440f-4ccb-980a-8f85d6043e98
    sockets: 1
    tablet: 0

    pveversion --verbose:
    proxmox-ve: 5.1-42 (running kernel: 4.15.3-1-pve)
    pve-manager: 5.1-47 (running version: 5.1-47/97a08ab2)
    pve-kernel-4.13: 5.1-42
    pve-kernel-4.15: 5.1-1
    pve-kernel-4.15.3-1-pve: 4.15.3-1
    pve-kernel-4.13.13-6-pve: 4.13.13-42
    pve-kernel-4.13.13-5-pve: 4.13.13-38
    pve-kernel-4.13.13-3-pve: 4.13.13-34
    pve-kernel-4.13.4-1-pve: 4.13.4-26
    pve-kernel-4.10.17-4-pve: 4.10.17-24
    pve-kernel-4.10.17-3-pve: 4.10.17-23
    pve-kernel-4.10.17-2-pve: 4.10.17-20
    pve-kernel-4.10.17-1-pve: 4.10.17-18
    pve-kernel-4.10.8-1-pve: 4.10.8-7
    pve-kernel-4.10.5-1-pve: 4.10.5-5
    pve-kernel-4.10.1-2-pve: 4.10.1-2
    pve-kernel-4.4.35-1-pve: 4.4.35-77
    corosync: 2.4.2-pve3
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon: 1.2-2
    libjs-extjs: 6.0.1-2
    libpve-access-control: 5.0-8
    libpve-apiclient-perl: 2.0-4
    libpve-common-perl: 5.0-28
    libpve-guest-common-perl: 2.0-14
    libpve-http-server-perl: 2.0-8
    libpve-storage-perl: 5.0-17
    libqb0: 1.0.1-1
    lvm2: 2.02.168-pve6
    lxc-pve: 2.1.1-3
    lxcfs: 2.0.8-2
    novnc-pve: 0.6-4
    proxmox-widget-toolkit: 1.0-12
    pve-cluster: 5.0-21
    pve-container: 2.0-19
    pve-docs: 5.1-16
    pve-firewall: 3.0-5
    pve-firmware: 2.0-4
    pve-ha-manager: 2.0-5
    pve-i18n: 1.0-4
    pve-libspice-server1: 0.12.8-3
    pve-qemu-kvm: 2.9.1-9
    pve-xtermjs: 1.0-2
    pve-zsync: 1.6-15
    qemu-server: 5.0-23
    smartmontools: 6.5+svn4324-1
    spiceterm: 3.0-5
    vncterm: 1.5-3
    zfsutils-linux: 0.7.6-pve1~bpo9

    Tried the following:

    cpu: host
    Result: very slow (unusable); in top it eats 700%+ CPU while it should be idle (it takes over a minute until the task bar is shown)

    args: -cpu host
    Result: Windows won't boot

    args: -cpu host,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=proxmox,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,enforce,kvm=off
    Result: VM very slow and reboots after a while under load without any log entry

    The same happens with -cpu max or EPYC.

    The only way is the kvm64 model, but a lot of CPU extensions are missing, which results in much lower performance.
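
    (For reference, such an args line can be set from the CLI, VMID 100 and a shortened flag list as an example:)
    Code:
    qm set 100 --args '-cpu host,kvm=off'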
     
    #12 efeu, Mar 16, 2018
    Last edited: Mar 16, 2018
    chrone likes this.
  13. fabian (Proxmox Staff Member)

    We'll see whether we can reproduce this somehow on our EPYC and Ryzen test lab hardware...
     
    efeu likes this.
  14. efeu (Member, Proxmox VE Subscriber)

  15. fabian (Proxmox Staff Member)

    So it seems we can reproduce some issues on new AMD hardware. We'll test whether the next 4.15 build, which is already in the pipeline, fixes them, and investigate otherwise. Thanks for the feedback!
     
    efeu likes this.
  16. morph027 (Active Member)

    Side note: my two 4.15 test subjects are still running without issues...
     
  17. alexskysilk (Active Member, Proxmox VE Subscriber)

  18. fireon (Well-Known Member, Proxmox VE Subscriber)

    Tested here on my backup server with 4.15.3-1-pve #1 SMP PVE 4.15.3-1. Works fine, but the "spec_ctrl" flag is not built in.
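
    (One way to check whether the running kernel exposes the flag:)
    Code:
    grep -wo spec_ctrl /proc/cpuinfo | sort -u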
     
  19. fabian (Proxmox Staff Member)

    The Spectre/Meltdown mitigations are closer to the upstream ones now, which means IBRS/IBPB usage will slowly be added back following upstream stable commits.
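
    One way to inspect the active mitigations on a running 4.15 kernel is the sysfs interface:
    Code:
    # prints one "Mitigation: ..." or "Vulnerable" line per issue
    grep . /sys/devices/system/cpu/vulnerabilities/*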
     
    fireon likes this.
  20. Vasu Sreekumar (Active Member)

    The 4.15.10-1-pve kernel is running without any problems.
     