Proxmox VE 6.1 released!

I upgraded from 6.0 to 6.1 and now my Windows VMs are bluescreening all over the place with several different symptoms such as "KERNEL SECURITY CHECK FAILURE".

I would like to try downgrading to 6.0, but "# apt-get install proxmox-ve=6.0-2" gives me proxmox-ve 6.0 while everything else stays at 6.1.

Is there a recommended way to install 6.0 with correct dependencies?
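
For reference, a minimal sketch of how one might pin individual packages back to explicit 6.0 versions with apt. The version strings are only examples taken from a 6.0 pveversion -v output further down in this thread, they may no longer be available in the repository, and a cross-release downgrade like this is not a tested or supported path:

# check which versions the repository still offers for the core packages
apt-cache policy pve-manager qemu-server pve-qemu-kvm
# pin the wanted packages to explicit 6.0 versions in one go (version numbers are examples only)
apt-get install proxmox-ve=6.0-2 pve-manager=6.0-9 qemu-server=6.0-9 pve-qemu-kvm=4.0.0-7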
 
@Robert Dahlem, can you please open a new thread? Please also post how the VM in question is configured, plus the output of pveversion -v.
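
For example (130 is just a placeholder VM ID):

# print the configuration of the affected VM (disk bus, machine type, CPU, OS type, ...)
qm config 130
# print the full package version listing of the node
pveversion -v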
 
ZVOLs get corrupted by SSD TRIM.

# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
ssd-storage                     3.46G   427G    24K  /ssd-storage
ssd-storage/data                3.46G   427G    24K  /ssd-storage/data
ssd-storage/data/vm-130-disk-0  3.46G   427G  3.46G  -
ssd-storage/data/vm-130-disk-1    15K   427G    15K  -
# zpool scrub ssd-storage
# zpool status
pool: ssd-storage
state: ONLINE
scan: scrub repaired 0B in 0 days 00:00:12 with 0 errors on Fri Dec 13 04:22:48 2019
config:

        NAME                                       STATE     READ WRITE CKSUM
        ssd-storage                                ONLINE       0     0     0
          ata-WDC_WDS480G2G0B-00EPW0_19144B800540  ONLINE       0     0     0

errors: No known data errors
# zpool trim ssd-storage
# zpool status
pool: ssd-storage
state: ONLINE
scan: scrub repaired 0B in 0 days 00:00:13 with 0 errors on Fri Dec 13 04:24:01 2019
config:

        NAME                                       STATE     READ WRITE CKSUM
        ssd-storage                                ONLINE       0     0     0
          ata-WDC_WDS480G2G0B-00EPW0_19144B800540  ONLINE       0     0     0  (trimming)

errors: No known data errors


After TRIM was done:
# zpool scrub ssd-storage
# zpool status -v
pool: ssd-storage
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://zfsonlinux.org/msg/ZFS-8000-8A
scan: scrub repaired 7K in 0 days 00:00:13 with 18 errors on Fri Dec 13 04:25:00 2019
config:

        NAME                                       STATE     READ WRITE CKSUM
        ssd-storage                                DEGRADED     0     0     0
          ata-WDC_WDS480G2G0B-00EPW0_19144B800540  DEGRADED     0     0    40  too many errors

errors: Permanent errors have been detected in the following files:

ssd-storage/data/vm-130-disk-0:<0x1>

I don't know if this is a Proxmox issue or a ZFS on Linux one.
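
A minimal sketch of how one could re-check that the reported zvol really returns damaged blocks, and reset the error counters before another test run (the device path follows the usual /dev/zvol layout for the pool shown above):

# read the whole zvol back; ZFS verifies every block against its checksum on the way
dd if=/dev/zvol/ssd-storage/data/vm-130-disk-0 of=/dev/null bs=1M
# the CKSUM counters and the error list should grow if the data is really damaged
zpool status -v ssd-storage
# clear the error counters before repeating the trim/scrub test
zpool clear ssd-storage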
 
No, the problem is persistent and I can reproduce it in 100% of cases; I added this disk yesterday. Previously this disk was used primarily in Windows and there were no problems with TRIM. I also rechecked this disk by writing pseudorandom data to the whole disk and reading it back without errors. Also, SMART is OK.

It seems TRIM in the current ZFS is unreliable: https://github.com/zfsonlinux/zfs/issues/8835
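
A minimal sketch of the kind of whole-disk write/read verification mentioned above; /dev/sdX is a placeholder and the test destroys all data on the disk:

# write a random pattern across the entire disk and read it back for comparison
badblocks -wsv -t random /dev/sdX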
 
Previously this disk was used primarily in Windows and there were no problems with TRIM.

Yeah, things tend to work before they break :) I just re-tested with various setups to be sure that I did not miss something, but TRIM simply works as intended here, with no data errors caused by it.

Also, SMART is OK.
smartctl is not an absolute truth: it cannot detect bitrot or anything bad that the disk itself did not expose over its controller interface. A zfs scrub can, as it verifies all data against its checksums.

It may well be something completely different, but quite a few people have been using ZFS TRIM since it became available here, and we are not aware of any general issue with it; that is what I wanted to point out. So I would look for something specific to your setup.
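
A few illustrative, setup-specific things worth checking (device and pool names assumed from the output earlier in the thread):

# SSD model and firmware revision
smartctl -i /dev/sdX
# which controller and kernel driver the disk sits behind
lspci -nnk | grep -iEA3 'sata|sas'
# whether automatic TRIM is enabled on the pool
zpool get autotrim ssd-storage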
 
No, the problem is persistent and I can reproduce it in 100% of cases; I added this disk yesterday. Previously this disk was used primarily in Windows and there were no problems with TRIM. I also rechecked this disk by writing pseudorandom data to the whole disk and reading it back without errors. Also, SMART is OK.

It seems TRIM in the current ZFS is unreliable: https://github.com/zfsonlinux/zfs/issues/8835

Are you using an HBA?
 
Since updating from 6.0 to 6.1, hosts have almost stopped sending stats to the external metric server. Old nodes (6.0) send all metrics, while updated nodes send only a limited number. All are in the same cluster, with the same settings.
Updated nodes only send the "system" metrics, but not everything they should (e.g. blockstat, nics, ballooninfo, ...).
root@kvm-01:~# cat /etc/pve/status.cfg
influxdb:
        server hostname
        port 25830
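
A quick way to see what the node actually puts on the wire; the port is taken from the status.cfg above, and this assumes the InfluxDB plugin is sending plain UDP line protocol:

# capture and print the metric packets leaving this node
tcpdump -ni any -A udp port 25830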

Updated node (6.1):
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.15-1-pve: 4.10.15-15
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-15
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-4
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

Old node (6.0):
proxmox-ve: 6.0-2 (running kernel: 5.0.21-3-pve)
pve-manager: 6.0-9 (running version: 6.0-9/508dcee0)
pve-kernel-5.0: 6.0-9
pve-kernel-helper: 6.0-9
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.10.15-1-pve: 4.10.15-15
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-5
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-65
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-7
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.0-7
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-9
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve1
 
Since updating from 6.0 to 6.1, hosts have almost stopped sending stats to the external metric server. Old nodes (6.0) send all metrics, while updated nodes send only a limited number. All are in the same cluster, with the same settings.
Updated nodes only send the "system" metrics, but not everything they should (e.g. blockstat, nics, ballooninfo, ...).

There was some work to accumulate and batch-send the metrics (rather than sending each line as a single network packet); this was done especially for the new TCP support in the graphite plugin (as creating a TCP connection is costly, and we ran into timeouts quickly).

It could be a side effect of this change; it would at least fit your working vs. non-working versions. Can you please open a report over at https://bugzilla.proxmox.com/ so we can keep track of this?
 
This might sound like a stupid question :cool:
On a setup with several nodes, we move the VMs from the node being upgraded to another one, and so on.
Do we have to stop/start each VM so it can benefit from the new QEMU and everything else new, or is moving them around enough?
 
hi,

This might sound like a stupid question :cool:
On a setup with several nodes, we move the VMs from the node being upgraded to another one, and so on.
Do we have to stop/start each VM so it can benefit from the new QEMU and everything else new, or is moving them around enough?

Live migration should be fine.
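
If you want to double-check which QEMU version a given VM is actually running after the migration, one way (130 is a placeholder VM ID) is to ask its monitor:

# open the guest's QEMU monitor, then type "info version" at the prompt
# (leave the monitor with Ctrl+C; the VM keeps running)
qm monitor 130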
 
Thank you.

Beside "automatic migration" when server goes down/restarts (orchestrated by HA service), is there a way to migrate several VM at the same time?
 
Beside "automatic migration" when server goes down/restarts (orchestrated by HA service), is there a way to migrate several VM at the same time?

You can use the 'Bulk Migrate' option. Set the parallel jobs higher for more parallelism.
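
If you prefer the shell, a rough equivalent is to start several live migrations in parallel yourself; the VM IDs and the target node name below are made-up examples:

# start three live migrations in parallel and wait for all of them to finish
for id in 101 102 103; do
    qm migrate "$id" target-node --online &
done
wait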
 
Great.

I had to switch to Server View to find it; that was my problem, sorry.
 
There was some work to accumulate and batch-send the metrics (rather than sending each line as a single network packet); this was done especially for the new TCP support in the graphite plugin (as creating a TCP connection is costly, and we ran into timeouts quickly).

It could be a side effect of this change; it would at least fit your working vs. non-working versions. Can you please open a report over at https://bugzilla.proxmox.com/ so we can keep track of this?
It was an MTU issue in my case.
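
For anyone hitting the same thing, a quick way to check whether full-size packets still fit through the path to the metric server; "metric-server" is a placeholder hostname, and 1472 is the payload size for a standard 1500-byte MTU minus 28 bytes of IP/ICMP headers:

# send non-fragmentable packets of full standard-MTU size towards the metric server
ping -M do -s 1472 -c 3 metric-server
# the mtu value of every local interface is printed here
ip link show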
 
We are very excited to announce the general availability of Proxmox VE 6.1.

It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies.

This release brings new configuration options to the GUI which make working with Proxmox VE even more comfortable and secure. Editing the cluster-wide bandwidth limit for traffic types such as migration, backup-restore, clone, etc. is possible via the GUI. If the optional ifupdown2 network interface manager package for Debian is installed, it's now possible to change the network configuration and reload it from the Proxmox web interface without a reboot. We have also improved two-factor authentication with TOTP and U2F.
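
Installing the optional package, for reference:

# pull in ifupdown2 so network changes can be applied from the GUI without a reboot
apt install ifupdown2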

The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown.

In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1.

We have some notable bug fixes, among them a fix for the QEMU monitor timeout issue and stability improvements for corosync. Countless other bug fixes and smaller improvements are listed in the release notes.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.1

Video intro
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-6-1

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso/

Documentation
https://pve.proxmox.com/pve-docs/

Community Forum
https://forum.proxmox.com

Source Code
https://git.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

FAQ
Q: Can I dist-upgrade Proxmox VE 6.0 to 6.1 with apt?
A: Yes, either via the GUI or on the CLI with apt update && apt dist-upgrade

Q: Can I install Proxmox VE 6.1 on top of Debian Buster?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster

Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.x and higher with Ceph Nautilus?
A: This is a two-step process. First, you have to upgrade Proxmox VE from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, so please follow the upgrade documentation exactly.
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus

Q: Where can I get more information about future feature updates?
A: Check our roadmap, forum, mailing list and subscribe to our newsletter.

A big THANK YOU to our active community for all your feedback, testing, bug reporting and patch submitting!

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
