Proxmox VE 5.3 released!

martin

Proxmox Staff Member
Apr 28, 2005
We are very excited to announce the general availability of Proxmox VE 5.3!

Proxmox VE now integrates CephFS, a distributed, POSIX-compliant file system that serves as an interface to Ceph storage (like RBD). You can use it to store backup files, ISO images, and container templates. CephFS, including its Metadata Server (MDS), can be created and configured easily via the GUI.
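For illustration (the storage ID and content types here are invented examples, not defaults), the resulting entry in /etc/pve/storage.cfg after adding a CephFS storage looks roughly like this:

```
# hypothetical storage.cfg entry for a pveceph-managed CephFS
cephfs: cephfs-store
        path /mnt/pve/cephfs-store
        content backup,iso,vztmpl
```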

We improved disk management: you can now add ZFS RAID volumes, LVM and LVM-thin pools, as well as additional disks with a traditional file system. The existing ZFS over iSCSI storage plug-in can now access LIO targets in the Linux kernel. Other new features include nesting for LXC containers, so you can run LXC and LXD inside containers, or access NFS or CIFS shares from within them. If you are adventurous, you can configure PCI passthrough and vGPUs via the GUI.
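As a sketch of what the PCI passthrough part produces (the VM ID and PCI address are made up for the example), the GUI ultimately writes a hostpciX line into the VM's config file, e.g. /etc/pve/qemu-server/100.conf:

```
# hypothetical example: hand PCI device 01:00.0 to the guest
hostpci0: 01:00.0
```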

Countless bug fixes and many smaller improvements are listed in the release notes.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.3

Video tutorial
Watch our short introduction video "What's new in Proxmox VE 5.3?"

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso/

Source Code
https://git.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

FAQ
Q: Can I install Proxmox VE 5.3 on top of Debian Stretch?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch

Q: Can I upgrade Proxmox VE 5.x to 5.3 with apt?
A: Yes, either via the GUI or on the CLI with apt update && apt dist-upgrade

Q: Can I upgrade Proxmox VE 4.x to 5.3 with apt dist-upgrade?
A: Yes, see https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0. If you run Ceph on 4.x, please also check https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous. Please note that Proxmox VE 4.x has been end of support since June 2018; see the Proxmox VE Support Lifecycle.

Many THANKS to our active community for all your feedback, testing, bug reporting and patch submitting!

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Apr 15, 2016
Congratulations, it looks very impressive!

Couple of questions.

1. Does the GUI interface for snapshots recognize snapshots created by the pve-zsync utility? Right now, if I create a snapshot via the GUI, it can't be used to restore if it is older than any snapshot created by pve-zsync. That would be fine if I could see the pve-zsync snapshots in the GUI.

2. Can Ubuntu snaps be used in a container now that AppArmor is supported?

Thanks,
Daniel
 

janos

Member
Aug 24, 2017
Hungary
Hello,
  • qemu-server: add ability to fstrim a guest filesystem after moving a disk (during move_disk or migrate with local disks)
Is it possible to do this without disk migration?
 

hawk128

New Member
May 22, 2017
Found a small issue with CephFS.
I added
Code:
[mon]
         mon_allow_pool_delete = true
into /etc/pve/ceph.conf earlier.

Now I tried to add a CephFS storage as a test. The storage gets created and mounts fine on the Debian side, but Proxmox could not mount it (a "?" status on this storage in the GUI).

The solution is to change this line in /usr/share/perl5/PVE/Storage/CephTools.pm
Code:
    @$server = sort map { $config->{$_}->{'mon addr'} } grep {/mon/} %{$config};
to
Code:
    @$server = sort map { $config->{$_}->{'mon addr'} } grep {/mon./} %{$config};
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
Hi dbayer,
1. Does the GUI interface for snapshots recognize snapshots created by the pve-zsync utility? Right now, if I create a snapshot via the GUI, it can't be used to restore if it is older than any snapshot created by pve-zsync. That would be fine if I could see the pve-zsync snapshots in the GUI.
pve-zsync only syncs its own zsync snapshots, so it stays compatible with all other snapshot tools.
If we synced all snapshots, pve-zsync would not be compatible with storage replication.
 

t.lamprecht

Proxmox Staff Member
Jul 28, 2015
South Tyrol/Italy
Found a small issue with CephFS.
I added
Code:
[mon]
         mon_allow_pool_delete = true
into /etc/pve/ceph.conf earlier.

Now I tried to add a CephFS storage as a test. The storage gets created and mounts fine on the Debian side, but Proxmox could not mount it (a "?" status on this storage in the GUI).

The solution is to change this line in /usr/share/perl5/PVE/Storage/CephTools.pm
Code:
    @$server = sort map { $config->{$_}->{'mon addr'} } grep {/mon/} %{$config};
to
Code:
    @$server = sort map { $config->{$_}->{'mon addr'} } grep {/mon./} %{$config};
Hmm, why do you need to allow pool deletion? It should also work through our API, no?

The fix is probably wanted in general, but you will want to escape the dot; otherwise it matches any single character rather than a literal dot. Thanks for the notice!
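The difference is easy to demonstrate (shown in Python rather than Perl, purely for illustration; the section names below are made-up examples): an unescaped dot in /mon./ matches any character after "mon", so a hypothetical key like "monitor_misc" would slip through, while /mon\./ matches only a literal dot.

```python
import re

# Hypothetical ceph.conf section names for illustration.
sections = ["mon.a", "mon.b", "monitor_misc", "mon"]

# Unescaped dot: '.' matches any single character, so "monitor_misc" matches too.
unescaped = [s for s in sections if re.search(r"mon.", s)]

# Escaped dot: only a literal '.' after "mon" matches.
escaped = [s for s in sections if re.search(r"mon\.", s)]

print(unescaped)  # ['mon.a', 'mon.b', 'monitor_misc']
print(escaped)    # ['mon.a', 'mon.b']
```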
 

Alwin

Proxmox Staff Member
Aug 1, 2017
Hmm, why do you need to allow pool deletion? It should also work through our API, no?
In the upgrade guide we put this into the [global] section, so we never encountered a general [mon] entry.
 

jimnordb

Member
May 4, 2016
Trying to get my Intel 630 to pass vGPUs through to VMs. I set

in /etc/default/grub
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"
in /etc/modules
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Then I run update-grub and update-initramfs -u -k all

After reboot I see the following. With

Code:
root@thebox:~# lspci
00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 07)
00:02.0 VGA compatible controller: Intel Corporation Device 3e96
and with
Code:
root@thebox:~# lshw -c video
  *-display
       description: VGA compatible controller
       product: Intel Corporation
       vendor: Intel Corporation
       physical id: 2
       bus info: pci@0000:00:02.0
       version: 00
       width: 64 bits
       clock: 33MHz
       capabilities: pciexpress msi pm vga_controller bus_master cap_list rom
       configuration: driver=i915 latency=0
       resources: irq:194 memory:90000000-90ffffff memory:b0000000-bfffffff ioport:5000(size=64) memory:c0000-dffff
  *-display
       description: VGA compatible controller
       product: ASPEED Graphics Family
       vendor: ASPEED Technology, Inc.
       physical id: 0
       bus info: pci@0000:07:00.0
       version: 41
       width: 32 bits
       clock: 33MHz
       capabilities: pm msi vga_controller cap_list
       configuration: driver=ast latency=0
       resources: irq:18 memory:91000000-91ffffff memory:92000000-9201ffff ioport:3000(size=128)
The graphics card shows up, but this is blank:
[attached screenshot: upload_2018-12-6_13-17-26.png]

I am guessing the driver is not working correctly. Here is my dmesg:
Code:
root@thebox:~# dmesg | grep 'i915'
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.18-9-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on i915.enable_gvt=1
[    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.18-9-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on i915.enable_gvt=1
[    6.467511] i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
[    6.472821] [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_01.bin (v1.1)
[    6.597882] [drm] Initialized i915 1.6.0 20171023 for 0000:00:02.0 on minor 1
[    7.053582] snd_hda_intel 0000:00:1f.3: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
[    7.560204] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
System:
Xeon 2176-G
Supermicro X11SCZ-F
 

dcsapak

Proxmox Staff Member
Feb 1, 2016
Vienna
Xeon 2176-G
Sadly, Intel chose not to give Coffee Lake this feature.
A quote from the official GVT-g documentation:
For client platforms, 5th, 6th, 7th or 7th SoC Generation Intel® Core Processor Graphics is required. For server platforms, E3_v4, E3_v5 or E3_v6 Xeon Processor Graphics is required.
https://github.com/intel/gvt-linux/wiki/GVTg_Setup_Guide
also see for example this issue https://github.com/intel/gvt-linux/issues/53#issuecomment-430924130
 

jimnordb

Member
May 4, 2016
Thanks for the super quick reply!

If you read the thread, a patch is coming upstream. When do you think this will be released to Proxmox users after the patch is added?
 

dcsapak

Proxmox Staff Member
Feb 1, 2016
Vienna
Thanks for the super quick reply!

If you read the thread, a patch is coming upstream. When do you think this will be released to Proxmox users after the patch is added?
No way to tell yet; first it has to be upstreamed, then we can see which kernel versions will include it.

Maybe we can only add this when we upgrade the kernel to a later version.
 

HaukeB

New Member
Dec 6, 2018
Hi,

there is an "Emulating ARM virtual machines (experimental, mostly useful for development purposes)" note in the release notes, but I can't find any information on how to use/enable that feature. The documentation says "Qemu can emulate a great variety of hardware from ARM to Sparc, but Proxmox VE is only concerned with 32 and 64 bits PC clone emulation, since it represents the overwhelming majority of server hardware.", which sounds like "no ARM emulation" to me.

Am I misinterpreting the release notes?

Greetings
Hauke
 

Alwin

Proxmox Staff Member
Aug 1, 2017
@HaukeB, no, ARM emulation is working in this release, but the documentation still needs some polishing. You can run ARM containers or QEMU guests.

Code:
arch: <aarch64 | x86_64>
Virtual processor architecture. Defaults to the host.
For VMs.
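Putting that together, a minimal, hypothetical VM config snippet (/etc/pve/qemu-server/<vmid>.conf) using this option might look like the following; the name and memory values are invented, and serial0 is an assumption here, since emulated ARM machines are typically used with a serial console:

```
# illustrative only
arch: aarch64
name: arm-test
memory: 1024
serial0: socket
```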
 
