Proxmox VE 5.4 released!

martin
Proxmox Staff Member
We are very pleased to announce the general availability of Proxmox VE 5.4.

Built on Debian 9.8 (Stretch) and a specially modified Linux Kernel 4.15, this version of Proxmox VE introduces a new wizard for installing Ceph storage via the user interface, and brings enhanced flexibility with HA clustering, hibernation support for virtual machines, and support for Universal Second Factor (U2F) authentication.

The new features of Proxmox VE 5.4 focus on usability and simple management of the software-defined infrastructure as well as on security management.

Countless bug fixes and smaller improvements are listed in the release notes.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.4

Video tutorial
What's new in Proxmox VE 5.4?

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso/

Documentation
https://pve.proxmox.com/pve-docs/

Source Code
https://git.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

FAQ
Q: Can I install Proxmox VE 5.4 on top of Debian Stretch?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch

Q: Can I upgrade Proxmox VE 5.x to 5.4 with apt?
A: Yes, either via the GUI or on the CLI with apt update && apt dist-upgrade
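For example, on the CLI (assuming a correct Proxmox VE 5.x repository is already configured):

Code:
apt update
apt dist-upgrade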

Q: Can I upgrade Proxmox VE 4.x to 5.4 with apt dist-upgrade?
A: Yes, see https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0. If you run Ceph on 4.x, please also check https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous. Please note that Proxmox VE 4.x has been end of support since June 2018; see Proxmox VE Support Lifecycle.

Many THANKS to our active community for all your feedback, testing, bug reporting and patch submitting!

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
btrfs currently does not perform in a way that would make us want to offer it to Proxmox VE users. But you can already use btrfs, just configure it manually.
 
No one here, or from the community (AFAIK), is actively working on it, so "postponed" means there is no real timeline; if no one steps up, it's unlikely for 6.0, I'd guess.

Some of the main issues: fsync is _really_ slow, about an order of magnitude worse than ZFS (an FS with a similar feature set). This showed up especially in our installer tests, as dpkg by default does an fsync after each extracted file (this could be worked around for the installer case, but would remain an issue after installation; see the sketch below). RAID 5/6 is still unstable, and replacing failed disks is not ideal yet either. As soon as this is addressed, there will probably be more interest and motivation to support it officially, but until then we just don't see it as ready to become a "Proxmox VE first-class FS" with deeper integration. I mean, it's included in the kernel, so you can already try it and use it as basic underlying storage.
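As an aside: for throwaway test installs, dpkg's per-file fsync can be disabled via its force-unsafe-io option. A minimal sketch (do not use on production systems, it trades crash consistency for speed):

Code:
# tell dpkg to skip the fsync after each extracted file (test setups only!)
echo force-unsafe-io > /etc/dpkg/dpkg.cfg.d/99-unsafe-io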
 
Congrats Proxmox team. Lots of great features and fixes in the changelog. :)
 
Do you plan to handle separate swap limit accounting using cgroup v2? LXC has had cgroup v2 support since version 3.
Currently I can't use Proxmox to configure an LXC container with a swap limit smaller than the total RAM limit; the swap limit is always "memory + swap".
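For reference, cgroup v1 only exposes the combined limit (memory.memsw.limit_in_bytes), while cgroup v2 accounts RAM and swap separately. A minimal sketch, assuming a mounted cgroup v2 hierarchy and a hypothetical group:

Code:
mkdir /sys/fs/cgroup/ct100                        # hypothetical group
echo 1G   > /sys/fs/cgroup/ct100/memory.max       # RAM limit
echo 256M > /sys/fs/cgroup/ct100/memory.swap.max  # swap limit, independent of RAM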
 
Please post your sources.list file; it seems something is wrong there.

What is your pveversion -v output?
 
Please post your sources.list file; it seems something is wrong there.

What is your pveversion -v output?
It looks like it had already done the update. My load balancer had just connected me to a node in the cluster on which the update had not yet been performed.
 
Any hints on VM hibernate? Is a specific version of the QEMU guest agent required? I tried a few minutes ago, but it fails...

Log output:


Code:
Apr 11 18:38:42  pvedaemon[6288]: suspend VM 133: UPID::00001890:08F90DCB:5CAF6D92:qmsuspend:133:root@pam:
Apr 11 18:38:44  multipathd[662]: sdf: spurious uevent, path not found
Apr 11 18:38:53  pvedaemon[40726]: VM 133 qmp command failed - VM 133 qmp command 'guest-ping' failed - got timeout
Apr 11 18:39:00  systemd[1]: Starting Proxmox VE replication runner...
Apr 11 18:39:00  systemd[1]: Started Proxmox VE replication runner.
Apr 11 18:39:02  pveproxy[40745]: worker exit
Apr 11 18:39:02  pveproxy[2803]: worker 40745 finished
Apr 11 18:39:02  pveproxy[2803]: starting 1 worker(s)
Apr 11 18:39:02  pveproxy[2803]: worker 6427 started
Apr 11 18:39:12  pvedaemon[40727]: VM 133 qmp command failed - VM 133 qmp command 'guest-ping' failed - got timeout
Apr 11 18:39:31  pvedaemon[40726]: VM 133 qmp command failed - VM 133 qmp command 'guest-ping' failed - got timeout

multipathd[662]: sdf: spurious uevent, path not found

is strange, because there is no multipath configured for that iSCSI resource...

The guest-agent ping from the command line works fine.
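For completeness, what I ran from the host (a sketch; 133 is the VMID from the log above):

Code:
qm agent 133 ping          # returns without error when the guest agent answers
qm suspend 133 --todisk 1  # hibernate, i.e. suspend to disk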
 
...

is strange, because there is no multipath configured for that iSCSI resource...

The guest-agent ping from the command line works fine.

Please post all details about your system setup, but please in a NEW thread with a suitable thread topic.
 
If you find yourself with non-functional SSH after the update, please make sure you still have libcomerr2 installed:

/usr/sbin/sshd: error while loading shared libraries: libcom_err.so.2: cannot open shared object file: No such file or directory

I had to reinstall it (apt install --reinstall), because after apt update, apt upgrade, and apt dist-upgrade my system still thought it had the newest version, but the library was gone.

Code:
# apt-file search /lib/x86_64-linux-gnu/libcom_err.so.2
libcomerr2: /lib/x86_64-linux-gnu/libcom_err.so.2
libcomerr2: /lib/x86_64-linux-gnu/libcom_err.so.2.1
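In case someone else hits the same problem, the recovery was simply (version numbers will vary):

Code:
apt update
apt install --reinstall libcomerr2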
 
As openssh-server has a Depends on "libcomerr2", this really shouldn't happen... We also did not change anything there, nor in the SSH packages (which come from Debian upstream)...

Can you check (and post) your /var/log/apt/history.log file to see what apt decided to do at the point of upgrading?

And remember: _never_ use apt(-get) upgrade, always dist-upgrade (or apt full-upgrade, which is an alias for dist-upgrade in newer apt versions). Simply using "pveupgrade" saves you from having to remember this, too.
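For example, a typical update run on a node:

Code:
apt update
pveupgrade   # wraps apt dist-upgrade and hints at a needed reboot after kernel updates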
 
Code:
Start-Date: 2019-04-12 11:06:34
Commandline: apt dist-upgrade
Install: libpython3.5:amd64 (3.5.3-1+deb9u1, automatic), python3.5:amd64 (3.5.3-1+deb9u1, automatic), python3.5-minimal:amd64 (3.5.3-1+deb9u1, automatic), python3.5-dev:amd64 (3.5.3-1+deb9u1, automatic), libpython3.5-dev:amd64 (3.5.3-1+deb9u1, automatic), libpython3.5-stdlib:amd64 (3.5.3-1+deb9u1, automatic), libpython3.5-minimal:amd64 (3.5.3-1+deb9u1, automatic), libegl1-glvnd-nvidia:amd64 (390.87-8~deb9u1, automatic)
Downgrade: libmpc3:amd64 (1.1.0-1, 1.0.3-1+b2), debconf:amd64 (1.5.71, 1.5.61), lib32ubsan0:amd64 (7.4.0-6, 6.3.0-18+deb9u1), libmpx2:amd64 (8.3.0-2, 6.3.0-18+deb9u1), python3-dev:amd64 (3.7.2-1, 3.5.3-1), libcomerr2:amd64 (1.44.5-1, 1.43.4-2), ...

So dist-upgrade downgraded a lot of things, among them libcomerr2.
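Such downgrades are easy to spot in the log, e.g.:

Code:
grep -B2 '^Downgrade:' /var/log/apt/history.log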

Looking at my sshd:

Code:
# ldd /usr/sbin/sshd
linux-vdso.so.1 (0x00007ffc3b338000)
libwrap.so.0 => /lib/x86_64-linux-gnu/libwrap.so.0 (0x00007f9d69eaa000)
libaudit.so.1 => /lib/x86_64-linux-gnu/libaudit.so.1 (0x00007f9d69c82000)
libpam.so.0 => /lib/x86_64-linux-gnu/libpam.so.0 (0x00007f9d69a74000)
libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f9d6984c000)
libsystemd.so.0 => /lib/x86_64-linux-gnu/libsystemd.so.0 (0x00007f9d6a4ff000)
libcrypto.so.1.0.2 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.2 (0x00007f9d693e6000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f9d691e3000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f9d68fc9000)
libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f9d68d91000)
libgssapi_krb5.so.2 => /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x00007f9d68b46000)
libkrb5.so.3 => /usr/lib/x86_64-linux-gnu/libkrb5.so.3 (0x00007f9d6886c000)
libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x00007f9d68668000) <----- *ding* *ding*
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9d682c9000)
libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x00007f9d680b1000)
libcap-ng.so.0 => /lib/x86_64-linux-gnu/libcap-ng.so.0 (0x00007f9d67eab000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9d67ca7000)
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f9d67a34000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9d6a37e000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f9d6782c000)
liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f9d67606000)
liblz4.so.1 => /usr/lib/x86_64-linux-gnu/liblz4.so.1 (0x00007f9d673f4000)
libgcrypt.so.20 => /lib/x86_64-linux-gnu/libgcrypt.so.20 (0x00007f9d670e4000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f9d66ec7000)
libk5crypto.so.3 => /usr/lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007f9d66c94000)
libkrb5support.so.0 => /usr/lib/x86_64-linux-gnu/libkrb5support.so.0 (0x00007f9d66a88000)
libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x00007f9d66884000)
libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007f9d6666d000)
libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0 (0x00007f9d66459000)

And hopefully this version is correct:

Code:
ii  openssh-client       1:7.4p1-10+deb9u6  amd64  secure shell (SSH) client, for secure access to remote machines
ii  openssh-server       1:7.4p1-10+deb9u6  amd64  secure shell (SSH) server, for secure access from remote machines
ii  openssh-sftp-server  1:7.4p1-10+deb9u6  amd64  secure shell (SSH) sftp server module, for SFTP access from remote machines


I made sure /usr/sbin/sshd is the file provided by openssh-server.
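Checked roughly like this:

Code:
# dpkg -S /usr/sbin/sshd
openssh-server: /usr/sbin/sshd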
 
