Proxmox VE 5.4 released!

@MoxProxxer , based on your other threads you mixed up a lot: e.g., you had Debian Stretch, updated to Buster, and then went back to Stretch - all of this is not really possible and is the cause of your issues.

I highly recommend a clean Stretch-based installation without any Buster packages.

And please open a new thread, as your issues are not related to the new release announcement.
 
Downgrade: libmpc3:amd64 (1.1.0-1, 1.0.3-1+b2), debconf:amd64 (1.5.71, 1.5.61), lib32ubsan0:amd64 (7.4.0-6, 6.3.0-18+deb9u1), libmpx2:amd64 (8.3.0-2, 6.3.0-18+deb9u1), python3-dev:amd64 (3.7.2-1, 3.5.3-1), libcomerr2:amd64 (1.44.5-1, 1.43.4-2),....

Downgrades?? Your installation seems to be a bit off in general. Do you by any chance already have a newer Debian (testing/buster) release installed?
Your system package state at least looks like it. If this is a production system, I can only recommend installing a clean PVE, ideally from our ISO. If you run a so-called "FrankenDebian", you're running a ticking time bomb.
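
One quick way to spot such a mixed installation (a sketch; it requires the apt-show-versions package) is to list every installed package whose origin is not a Stretch suite:

apt install apt-show-versions
# anything printed here did not come from a stretch repository
apt-show-versions | grep -v '/stretch'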
 
We are very pleased to announce the general availability of Proxmox VE 5.4.

Built on Debian 9.8 (Stretch) and a specially modified Linux Kernel 4.15, this version of Proxmox VE introduces a new wizard for installing Ceph storage via the user interface, and brings enhanced flexibility with HA clustering, hibernation support for virtual machines, and support for Universal Second Factor (U2F) authentication.

The new features of Proxmox VE 5.4 focus on usability and simple management of the software-defined infrastructure as well as on security management.

Countless bug fixes and smaller improvements are listed in the release notes.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.4

Video tutorial
What's new in Proxmox VE 5.4?

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso/

Documentation
https://pve.proxmox.com/pve-docs/

Source Code
https://git.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

FAQ
Q: Can I install Proxmox VE 5.4 on top of Debian Stretch?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch
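For orientation, the wiki steps boil down to adding the Proxmox VE repository and its signing key on a plain Stretch system; a condensed sketch (see the wiki for the authoritative, complete procedure):

# add the Proxmox VE no-subscription repository and its signing key
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt update && apt full-upgrade
apt install proxmox-ve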

Q: Can I upgrade Proxmox VE 5.x to 5.4 with apt?
A: Yes, either via the GUI or on the CLI with `apt update && apt dist-upgrade`, as sketched below.
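Spelled out on the shell, with a quick version check first (a minimal sketch):

# confirm the currently running version, then upgrade the node
pveversion
apt update
apt dist-upgrade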

Q: Can I upgrade Proxmox VE 4.x to 5.4 with apt dist-upgrade?
A: Yes, see https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0. If you run Ceph on 4.x, please also check https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous. Please note that Proxmox VE 4.x has been end of support since June 2018; see the Proxmox VE Support Lifecycle.

Many THANKS to our active community for all your feedback, testing, bug reporting and patch submitting!

__________________
Best regards,

Martin Maurer
Proxmox VE project leader


Thanks to the Proxmox team!!!


Question: Is there a plan to improve the usability of the ZFS storage plugin? We are waiting for an option for ZFS dataset creation and snapshot management in this nice GUI plugin.

Is this maybe part of Proxmox VE 6.0, incl. ZFS 0.8 with its TRIM and encryption features?
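
In the meantime we create datasets on the CLI and register them as storage, which works but is less convenient; a sketch (pool, dataset, and storage names are examples):

# create a new dataset on an existing pool and add it as PVE storage
zfs create rpool/data/vmstore
pvesm add zfspool vmstore --pool rpool/data/vmstore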
 
I am very happy to hear that you have released a new version. However, what I want most is support for the Ceph Mimic release, which is not yet available. I look forward to you supporting it soon.
 
1. An upgrade from an existing Luminous cluster to Nautilus is possible, but needs a bit of work; we're on it.
2. The Ceph Nautilus release uses new C++ features that are only available in quite recent compilers, which Debian Stretch does not ship (it provides gcc 6). As the compiler and its libc are still the fundamental base of any modern Linux distribution, this is not easily solved by cross-compiling or backporting new compilers - one is guaranteed to run into trouble there, which, for a technology expected to provide a highly available and reliable storage backend like Ceph, is a no-go for us.

But the next major release, Proxmox VE 6, based on Debian Buster, has the required support for those shiny new C++ language features, and we can and will provide a clean and stable solution with Ceph Nautilus there.

Edit: I actually talked about Ceph Nautilus planned for 6.0, sorry.
 

Thanks for your answer.
Can you tell me when version 6.0 will be released?
 
Can you tell me when version 6.0 will be released?

No, I'm afraid I cannot give any date.. I mean, Buster needs to come out first, I'd guess :-)
But I would not hold off on new setups: upgrades will be possible, and Proxmox VE 5.4 with Luminous is, IMO, a very good release.
 
Just wanted to say thank you and good job!
Looking forward to testing: "Suspend to disk/hibernate support for Qemu/KVM guests" :-)
It would allow me to do upgrades without live migration. :-)
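
From the CLI this should look roughly like the following (a sketch; --todisk is the new hibernate flag as I understand the release notes, and 100 is an example VMID):

# write the VM's RAM state to disk and power it off
qm suspend 100 --todisk 1
# later: boot resumes from the saved state
qm start 100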
 
Hi, thanks for new release!

Unfortunately, after upgrading from 5.3 to 5.4 there is an issue with accessing the CD-ROM options.
Double-clicking on the CD-ROM drive doesn't trigger the .iso selection modal/popup.
After downgrading the pve-manager package to 5.3-12 it works again.
 

The CD-ROM edit window for VMs works just fine here in 5.4; I just re-tested with Firefox 66 and Chromium 73. Can you retry and make sure the browser cache is cleared?
If it is still an issue then, please open a new thread with additional info: your browser and its version, the `pveversion -v` output, whether you have a touch screen - anything that could possibly be relevant.
 
2. The Ceph Nautilus release uses new C++ features that are only available in quite recent compilers, which Debian Stretch does not ship (it provides gcc 6). As the compiler and its libc are still the fundamental base of any modern Linux distribution, this is not easily solved by cross-compiling or backporting new compilers - one is guaranteed to run into trouble there, which, for a technology expected to provide a highly available and reliable storage backend like Ceph, is a no-go for us.

I see it a bit differently, because Croit (a great Ceph deployment tool) has this already, and there was no trouble using Ceph Mimic on Debian.
Release notes: https://croit.io/2018/09/23/2018-09-23-debian-mirror

Many users of HCI clusters are a little mad. A colleague is already thinking about passing all SSDs through to a single VM in order to run the newest Ceph version and not have to wait on you guys. Yes, PVE is a great tool, and you are right that it must run stably - but other companies have already got this done, and I think Croit has many big customers.
Nautilus is now out and our cluster runs on Luminous... That's now two versions behind, and that's not really nice.

For example (new in Nautilus): "Embedded Grafana Dashboards (derived from Ceph Metrics)" - I spent the last 3 days pulling metrics from the cluster into Graphite and creating multiple dashboards.
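
For reference, even a single hand-rolled metric push into Graphite's plaintext listener is only a couple of lines; a sketch, assuming a listener at graphite.example.com:2003 and a Luminous-style `ceph df` JSON layout:

# grab the cluster's used bytes and send them to Graphite
USED=$(ceph df --format json | python3 -c 'import sys,json; print(json.load(sys.stdin)["stats"]["total_used_bytes"])')
echo "ceph.cluster.used_bytes $USED $(date +%s)" | nc -q0 graphite.example.com 2003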
 
Thank you for the 5.4 release and your ongoing efforts to improve the usability of Ceph as deployed by Proxmox. Can anyone comment on any improvements planned for the support of rbd-mirror configurations?

Thanks,
K
 
I see it a bit differently, because Croit (a great Ceph deployment tool) has this already, and there was no trouble using Ceph Mimic on Debian.
Release notes: https://croit.io/2018/09/23/2018-09-23-debian-mirror

See the last paragraph of that linked page for why this is not an option for PVE:

The new build depends on a newer version of libc than is shipped by default with Debian. Upgrading Ceph from our repository will also upgrade libc.

We have validated the compatibility with our image only, use at your own risk.

Just switching out libc might be possible for a single-purpose installation with a very static and small set of installed software (i.e., only Ceph), but it is not for PVE, which aims to remain compatible with Debian packages from the underlying base release.
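
If you want to see this concretely, you can inspect which versioned glibc symbols a Ceph binary requires and compare that against the libc Stretch ships; a quick diagnostic sketch (the binary path is an example):

# list the glibc symbol versions the binary was linked against
objdump -T /usr/bin/ceph-osd | grep -o 'GLIBC_[0-9.]*' | sort -uV
# the glibc version the system actually provides
ldd --version | head -n1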
 
BTW, why is there still QEMU 2 in Proxmox? QEMU 4.0.0 is already out...

Because we prefer stability over the bleeding edge and its unavoidable new bugs and regressions, which are far from ideal in production.
That said, we packaged 3.0.1 a while ago; it's currently available through our pvetest repository.
Further, note that QEMU had two big version bumps in under a year lately, while the 2.x series was released over a span of multiple years - one shouldn't be too fixated on version numbers, IMO :)
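
If you want to try that build on a test system, enabling the pvetest repository is enough; a sketch for a Stretch-based node (please don't do this on production):

# enable the pvetest repository (test systems only!)
echo "deb http://download.proxmox.com/debian/pve stretch pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
apt install pve-qemu-kvm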
 

This problem still persists for me.
My users and I tested it in different browsers (Chrome, Chromium, Firefox) on different OSes (Ubuntu 18.04, Win 10).

Please check if you are able to reproduce it by creating a new user with only the "PVEVMUser" role.
For a user with this minimal set of permissions, the "Edit" button is disabled for the virtual CD-ROM.
For a user with admin permissions there is no such problem.

Maybe this issue in the bugtracker is related?
https://bugzilla.proxmox.com/show_bug.cgi?id=2197
 
Hi,

I installed the latest version with updates.

The server crashes every 48 hrs or so, also displaying "Detected Hardware Unit Hang" on the Intel Corporation Ethernet Connection (2) I219-LM (rev 31).

Please advise.

Further details below.

Linux scrypt 4.15.18-14-pve #1 SMP PVE 4.15.18-39 (Wed, 15 May 2019 06:56:23 +0200) x86_64 GNU/Linux

proxmox-ve: 5.4-1 (running kernel: 4.15.18-14-pve)
pve-manager: 5.4-5 (running version: 5.4-5/c6fdb264)
pve-kernel-4.15: 5.4-2
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-13-pve: 4.15.18-37
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: not correctly installed
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-9
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-51
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-42
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-26
pve-cluster: 5.0-37
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-20
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-2
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-51
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3

00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
Subsystem: Fujitsu Technology Solutions Ethernet Connection (2) I219-LM
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 142
Region 0: Memory at ef200000 (32-bit, non-prefetchable) [size=128K]
Capabilities: [c8] Power Management version 3
Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
Address: 00000000fee00498 Data: 0000
Capabilities: [e0] PCI Advanced Features
AFCap: TP+ FLR+
AFCtrl: FLR-
AFStatus: TP-
Kernel driver in use: e1000e
Kernel modules: e1000e

May 22 06:34:15 scrypt kernel: [101650.550599] e1000e 0000:00:1f.6 enp0s31f6: Detected Hardware Unit Hang:
May 22 06:34:15 scrypt kernel: [101650.550599] TDH <6d>
May 22 06:34:15 scrypt kernel: [101650.550599] TDT <80>
May 22 06:34:15 scrypt kernel: [101650.550599] next_to_use <80>
May 22 06:34:15 scrypt kernel: [101650.550599] next_to_clean <6c>
May 22 06:34:15 scrypt kernel: [101650.550599] buffer_info[next_to_clean]:
May 22 06:34:15 scrypt kernel: [101650.550599] time_stamp <101829e46>
May 22 06:34:15 scrypt kernel: [101650.550599] next_to_watch <6d>
May 22 06:34:15 scrypt kernel: [101650.550599] jiffies <101829f60>
May 22 06:34:15 scrypt kernel: [101650.550599] next_to_watch.status <0>
May 22 06:34:15 scrypt kernel: [101650.550599] MAC Status <80083>
May 22 06:34:15 scrypt kernel: [101650.550599] PHY Status <796d>
May 22 06:34:15 scrypt kernel: [101650.550599] PHY 1000BASE-T Status <7800>
May 22 06:34:15 scrypt kernel: [101650.550599] PHY Extended Status <3000>
May 22 06:34:15 scrypt kernel: [101650.550599] PCI Status <10>
May 22 06:34:17 scrypt kernel: [101652.566531] e1000e 0000:00:1f.6 enp0s31f6: Detected Hardware Unit Hang:
May 22 06:34:17 scrypt kernel: [101652.566531] TDH <6d>
May 22 06:34:17 scrypt kernel: [101652.566531] TDT <80>
May 22 06:34:17 scrypt kernel: [101652.566531] next_to_use <80>
May 22 06:34:17 scrypt kernel: [101652.566531] next_to_clean <6c>
May 22 06:34:17 scrypt kernel: [101652.566531] buffer_info[next_to_clean]:
May 22 06:34:17 scrypt kernel: [101652.566531] time_stamp <101829e46>
May 22 06:34:17 scrypt kernel: [101652.566531] next_to_watch <6d>
May 22 06:34:17 scrypt kernel: [101652.566531] jiffies <10182a158>
May 22 06:34:17 scrypt kernel: [101652.566531] next_to_watch.status <0>
May 22 06:34:17 scrypt kernel: [101652.566531] MAC Status <80083>
May 22 06:34:17 scrypt kernel: [101652.566531] PHY Status <796d>
May 22 06:34:17 scrypt kernel: [101652.566531] PHY 1000BASE-T Status <7800>
May 22 06:34:17 scrypt kernel: [101652.566531] PHY Extended Status <3000>
May 22 06:34:17 scrypt kernel: [101652.566531] PCI Status <10>

Looks like the old problem has returned.
The server tends to last about 48 hrs before crashing.

Settings for enp0s31f6:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: on (auto)
Supports Wake-on: pumbg
Wake-on: g
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes

ethtool -K enp0s31f6 sg off tso off gro off
Cannot get device udp-fragmentation-offload settings: Operation not supported
Cannot get device udp-fragmentation-offload settings: Operation not supported
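
A commonly suggested workaround for e1000e "Detected Hardware Unit Hang" reports is to keep those offloads disabled permanently; with ifupdown this can be hooked into /etc/network/interfaces, a sketch assuming enp0s31f6 is the bridge port:

# /etc/network/interfaces (fragment) - reapply the workaround on every ifup
iface enp0s31f6 inet manual
        post-up /sbin/ethtool -K enp0s31f6 sg off tso off gro off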
 
