Proxmox VE 5.3 released!

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Dec 4, 2018.

  1. martin

    martin Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    625
    Likes Received:
    289
    We are very excited to announce the general availability of Proxmox VE 5.3!

    Proxmox VE now integrates CephFS, a distributed, POSIX-compliant file system which serves as an interface to the Ceph storage (like RBD). You can use it to store backup files, ISO images, and container templates. CephFS and its Metadata Server (MDS) can be created and configured easily via the GUI.
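    For illustration, a CephFS entry in /etc/pve/storage.cfg can look roughly like this (a sketch; the storage ID, path, and selected content types are made-up examples, not from the release notes):

    ```
    cephfs: cephfs-store
            path /mnt/pve/cephfs-store
            content backup,iso,vztmpl
    ```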

    We improved disk management: you can now add ZFS RAID volumes, LVM and LVM-thin pools, as well as additional disks with a traditional file system. The existing ZFS over iSCSI storage plug-in can now access a LIO target in the Linux kernel. Other new features are nesting for LXC containers, so you can use LXC and LXD inside containers, or access NFS or CIFS shares from inside them. If you are adventurous, you can configure PCI passthrough and vGPUs via the GUI.
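    As a sketch, the container nesting feature mentioned above is enabled per container via its config file (the VMID 100 below is an example):

    ```
    # /etc/pve/lxc/100.conf (excerpt; 100 is an example VMID)
    features: nesting=1
    ```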

    Countless bug fixes and smaller improvements are listed in the release notes.

    Release notes
    https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.3

    Video tutorial
    Watch our short introduction video What's new in Proxmox VE 5.3?

    Download
    https://www.proxmox.com/en/downloads
    Alternate ISO download:
    http://download.proxmox.com/iso/

    Source Code
    https://git.proxmox.com

    Bugtracker
    https://bugzilla.proxmox.com

    FAQ
    Q: Can I install Proxmox VE 5.3 on top of Debian Stretch?
    A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch

    Q: Can I upgrade Proxmox VE 5.x to 5.3 with apt?
    A: Yes, either via the GUI or via the CLI with apt update && apt dist-upgrade

    Q: Can I upgrade Proxmox VE 4.x to 5.3 with apt dist-upgrade?
    A: Yes, see https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0. If you run Ceph on 4.x, please also check https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous. Please note that Proxmox VE 4.x has been end of support since June 2018, see Proxmox VE Support Lifecycle

    Many THANKS to our active community for all your feedback, testing, bug reporting and patch submitting!

    __________________
    Best regards,

    Martin Maurer
    Proxmox VE project leader
     
  2. alsicorp

    alsicorp New Member
    Proxmox VE Subscriber

    Joined:
    Sep 25, 2013
    Messages:
    7
    Likes Received:
    0
    Congrats!!
     
  3. dbayer

    dbayer New Member
    Proxmox VE Subscriber

    Joined:
    Apr 15, 2016
    Messages:
    29
    Likes Received:
    1
    Congratulations, it looks very impressive!

    Couple of questions.

    1. Does the GUI for snapshots recognize snapshots created by the pve-zsync utility? Right now, if I create a snapshot via the GUI, it can't be used to restore if it is older than any snapshot created by pve-zsync. Which would be fine if I could see the pve-zsync snapshots in the GUI.

    2. Can Ubuntu snaps be used in a container now that AppArmor is supported?

    Thanks,
    Daniel
     
  4. peterx

    peterx Member

    Joined:
    May 5, 2008
    Messages:
    39
    Likes Received:
    1
    Thanks,
    Nice progress!!
     
  5. morph027

    morph027 Active Member

    Joined:
    Mar 22, 2013
    Messages:
    398
    Likes Received:
    46
    Awesome and thank you for your hard work!
     
  6. EuroDomenii

    EuroDomenii Member
    Proxmox VE Subscriber

    Joined:
    Sep 30, 2016
    Messages:
    99
    Likes Received:
    15
  7. janos

    janos Member

    Joined:
    Aug 24, 2017
    Messages:
    134
    Likes Received:
    13
    Hello,
    • qemu-server: add ability to fstrim a guest filesystem after moving a disk (during move_disk or migrate with local disks)
    Is it possible to do this without disk migration?
     
  8. hawk128

    hawk128 New Member

    Joined:
    May 22, 2017
    Messages:
    9
    Likes Received:
    0
    Found a small issue with CephFS.
    I added
    Code:
    [mon]
             mon_allow_pool_delete = true
    
    into /etc/pve/ceph.conf earlier.

    Now I tried to add CephFS storage as a test. It gets created and mounted on the Debian filesystem, but it could not be mounted into Proxmox (a '?' in the GUI status for this storage).

    The solution is to change this line in /usr/share/perl5/PVE/Storage/CephTools.pm
    Code:
        @$server = sort map { $config->{$_}->{'mon addr'} } grep {/mon/} %{$config};
    to
        @$server = sort map { $config->{$_}->{'mon addr'} } grep {/mon./} %{$config};
     
  9. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    4,072
    Likes Received:
    250
    Hi dbayer,
    pve-zsync only syncs its own zsync snapshots, so it is compatible with all other snapshot tools.
    If we synced all snapshots, pve-zsync would not be compatible with storage replication.
     
  10. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    4,072
    Likes Received:
    250
    Hi janos,
    You can run trim with the command
    Code:
    qm guest cmd <vmid> fstrim
    
     
    janos and shantanu like this.
  11. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    909
    Likes Received:
    89
    Hmm, why do you need to allow pool deletion? It would also work through our API, no?

    The fix in general is probably wanted, but you want to escape the dot, as otherwise it does not match a literal dot but any single character. But thanks for the notice!
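    To illustrate the difference with plain grep on sample text (not the Perl code above, just the same regex semantics):

    ```shell
    # 'mon.' (unescaped): matches "mon" followed by ANY character
    printf 'mon.a\nmonXa\nmon\n' | grep 'mon.'
    # matches "mon.a" and "monXa", but not the bare "mon" line

    # 'mon\.' (escaped): matches only a literal "mon."
    printf 'mon.a\nmonXa\nmon\n' | grep 'mon\.'
    # matches only "mon.a"
    ```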
     
  12. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,741
    Likes Received:
    151
    In the upgrade guide we put this into the global section, so we never encountered the general mon entry.
     
  13. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    4,072
    Likes Received:
    250
    Hi RobFantini,

    this is a minor update, so dist-upgrade is enough.
    Just update; a reboot is only necessary for the new kernel.
     
  14. jimnordb

    jimnordb New Member

    Joined:
    May 4, 2016
    Messages:
    8
    Likes Received:
    0
    Trying to get my Intel 630 to pass through vGPUs for VMs. I set

    in /etc/default/grub
    Code:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"
    in /etc/modules
    Code:
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
    
    Then I run update-grub and update-initramfs -u -k all

    After rebooting I see with

    Code:
    root@thebox:~# lspci
    00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
    00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 07)
    00:02.0 VGA compatible controller: Intel Corporation Device 3e96
    and with
    Code:
    root@thebox:~# lshw -c video
      *-display
           description: VGA compatible controller
           product: Intel Corporation
           vendor: Intel Corporation
           physical id: 2
           bus info: pci@0000:00:02.0
           version: 00
           width: 64 bits
           clock: 33MHz
           capabilities: pciexpress msi pm vga_controller bus_master cap_list rom
           configuration: driver=i915 latency=0
           resources: irq:194 memory:90000000-90ffffff memory:b0000000-bfffffff ioport:5000(size=64) memory:c0000-dffff
      *-display
           description: VGA compatible controller
           product: ASPEED Graphics Family
           vendor: ASPEED Technology, Inc.
           physical id: 0
           bus info: pci@0000:07:00.0
           version: 41
           width: 32 bits
           clock: 33MHz
           capabilities: pm msi vga_controller cap_list
           configuration: driver=ast latency=0
           resources: irq:18 memory:91000000-91ffffff memory:92000000-9201ffff ioport:3000(size=128)
    The graphics card shows up, but the attached screenshot is blank:
    upload_2018-12-6_13-17-26.png

    I am guessing the driver is not working correctly. Here is my dmesg:
    Code:
    root@thebox:~# dmesg | grep 'i915'
    [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.18-9-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on i915.enable_gvt=1
    [    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.18-9-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on i915.enable_gvt=1
    [    6.467511] i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
    [    6.472821] [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_01.bin (v1.1)
    [    6.597882] [drm] Initialized i915 1.6.0 20171023 for 0000:00:02.0 on minor 1
    [    7.053582] snd_hda_intel 0000:00:1f.3: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
    [    7.560204] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
    
    System:
    Xeon 2176-G
    Supermicro X11SCZ-F
     
    #14 jimnordb, Dec 6, 2018
    Last edited: Dec 6, 2018
  15. dcsapak

    dcsapak Proxmox Staff Member
    Staff Member

    Joined:
    Feb 1, 2016
    Messages:
    2,924
    Likes Received:
    266
    Sadly, Intel chose not to bring this feature to Coffee Lake.
    See the official GVT-g setup guide:
    https://github.com/intel/gvt-linux/wiki/GVTg_Setup_Guide
    Also see, for example, this issue: https://github.com/intel/gvt-linux/issues/53#issuecomment-430924130
     
  16. jimnordb

    jimnordb New Member

    Joined:
    May 4, 2016
    Messages:
    8
    Likes Received:
    0
    Thanks for super quick reply!

    If you read the thread, a patch is coming upstream. When do you think this will be released for Proxmox users after the patch is added?
     
  17. dcsapak

    dcsapak Proxmox Staff Member
    Staff Member

    Joined:
    Feb 1, 2016
    Messages:
    2,924
    Likes Received:
    266
    No way to tell yet; first it has to be upstreamed, then we can see which kernel versions will include it.

    Maybe we can only add this when we upgrade the kernel to a later version.
     
  18. jimnordb

    jimnordb New Member

    Joined:
    May 4, 2016
    Messages:
    8
    Likes Received:
    0
    Looking forward to this! Great update by the way!
     
  19. HaukeB

    HaukeB New Member

    Joined:
    Dec 6, 2018
    Messages:
    1
    Likes Received:
    0
    Hi,

    there is a "Emulating ARM virtual machines (experimental, mostly useful for development purposes)" notice on the release notes, but i don't find any information how to use/enable that feature. The documentation says "Qemu can emulate a great variety of hardware from ARM to Sparc, but Proxmox VE is only concerned with 32 and 64 bits PC clone emulation, since it represents the overwhelming majority of server hardware." which sounds like "no arm emulation" to me.

    Am I misinterpreting the release notes?

    Greetings
    Hauke
     
  20. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,741
    Likes Received:
    151
    @HaukeB, no, ARM emulation is working in this release, but the documentation still needs some polishing. You can run ARM containers or qemu guests.

    Code:
    arch: <aarch64 | x86_64>
    Virtual processor architecture. Defaults to the host.
    For VMs.
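    As a sketch of the VM case (the VMID 100 below is an example; the rest of the config is omitted), the architecture would be set in the VM config file:

    ```
    # /etc/pve/qemu-server/100.conf (excerpt; 100 is an example VMID)
    arch: aarch64
    ```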
     
    shantanu and morph027 like this.