Proxmox VE 6.0 released!

t.lamprecht

Proxmox Staff Member
Jul 28, 2015
South Tyrol/Italy
I followed the steps here and whilst the pve5to6 checklist says that I am on Proxmox 6, it still shows 5.4.11 in the WebUI. Is this a bug?

https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0

Am I missing something? It is also showing 41 packages that can be upgraded under "origin: Proxmox", but if I run apt dist-upgrade, it doesn't seem to upgrade any of these packages.
Did you (force) reload the browser tab? If that does not solve it, please post your
Code:
pveversion -v
output.
 

sdet00

New Member
Nov 18, 2017
Yes, I pressed CTRL + F5. Here's a screenshot: https://i.imgur.com/iddFBzA.png

Here's a paste of the output:

Code:
root@pve01:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 5.4-11 (running version: 5.4-11/6df3d8d0)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 3.0.2-pve2
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1~bpo9+2
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-60
proxmox-widget-toolkit: 2.0-5
pve-cluster: 5.0-37
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 3.0-22
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.13.2-1
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 

Dutch2005

New Member
Jun 10, 2017
sdet00 said:
Yes, I pressed CTRL + F5. Here's a screenshot: https://i.imgur.com/iddFBzA.png
[pveversion -v output quoted above]
It seems some packages were not updated:

- zfsutils-linux is not updated (it should be 0.8.1 now, not 0.7.13)
- pve-manager is not updated

See my server:

Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
 

Davyd

New Member
Apr 8, 2016
Has anyone tried ZFS encryption? I installed a test environment (ZFS RAID 10 with 4 HDDs), then created an encrypted dataset:
zfs create -s -V 104857600k -o encryption=on -o keylocation=prompt -o keyformat=passphrase rpool/data/vm-100-disk-0
Then I rebooted the server and the system won't boot.
Before creating this dataset, the system had rebooted successfully multiple times.
 

t.lamprecht

Proxmox Staff Member
sdet00 said:
Yes, I pressed CTRL + F5. Here's a screenshot: https://i.imgur.com/iddFBzA.png
[pveversion -v output quoted above]
How did you get into the situation where you upgraded "proxmox-ve" but not the other packages?

Re-check your repositories (i.e., is a valid and accessible Proxmox repository configured?) and re-check the upgrade how-to.
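If it helps, here is a quick sketch of that repository check (the helper name `check_pve_sources` is my own invention; the idea is just to flag any apt source still pointing at Debian stretch, which would leave pve-manager held back at 5.4):

```shell
#!/bin/sh
# Hypothetical helper: flag apt source entries that still reference the
# old "stretch" (Debian 9 / PVE 5) suite instead of "buster" (PVE 6).
check_pve_sources() {
    # $1: concatenated contents of /etc/apt/sources.list and
    #     /etc/apt/sources.list.d/*.list
    if printf '%s\n' "$1" | grep -q 'stretch'; then
        echo "stale"   # at least one PVE 5 era repository remains
    else
        echo "ok"      # everything points at buster (or newer)
    fi
}

# On the affected host you would run it as:
#   check_pve_sources "$(cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list)"
```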
 
Dec 15, 2016
Berlin
fabian said:
This and similar issues are the main reason we switched to a non-Grub UEFI bootloader setup with 6.0. The Grub ZFS implementation is nearly unmaintained and severely behind the actual ZFS on Linux code base (it is basically a read-only parser for the on-disk ZFS data structures from a couple of years ago), with some as-yet-unfixed but hard/impossible to reproduce bugs that can lead to unbootable pools. All the writes from the dist-upgrade probably made some on-disk structure line up in exactly one of those ways that Grub chokes on. You can try to randomly copy/move the kernel and initrd files in /boot around in the hope that they get rewritten in a way that Grub "likes" again.

But the sensible way forward, if you still have free space (or even ESP partitions, e.g. if the server was set up with 5.4) on your vdev disks, is to use "pve-efiboot-tool" to opt into the new bootloader setup. If that is not an option, you likely need to set up some sort of extra boot device that is not on ZFS, or redo the new bootloader setup via backup - reinstall - restore. We tried hard to investigate and fix these issues within Grub (I lost track of the number of hours I spent digging through Grub debug output via serial console sometime last year, and can personally attest that there are many, many more fun ways to spend your time ;)), but in the end it is sometimes easier to cut your losses and start from scratch. As an intermediate solution / quick fix to get your system booted again, consider moving or copying your /boot partition to some external medium like a high-quality USB disk, or a spare disk if you have one.
Hi Fabian,

Thank you for your response. Unfortunately, all of our servers are HP ProLiant DL360 Gen8 or HP Microserver Gen8 and do not support UEFI boot. I helped myself by installing a separate small boot SSD in the system that wouldn't boot anymore, but I am concerned about upgrading one of our (PVE licensed) servers, which has this setup:

Code:
zpool list -v
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  3.47T  2.63T   856G         -    58%    75%  1.00x  ONLINE  -
  mirror  1.73T  1.32T   428G         -    58%    75%
    sda3      -      -      -         -      -      -
    sdb3      -      -      -         -      -      -
  mirror  1.73T  1.32T   428G         -    58%    75%
    sdc      -      -      -         -      -      -
    sdd      -      -      -         -      -      -
This is a similar setup to the one I upgraded that wouldn't boot anymore... Backing up and restoring 3.47 TB is not a fun job...
Any other options to reduce the risk?

Any help is appreciated...
Ralf.
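For reference, the "pve-efiboot-tool" opt-in fabian mentioned looks roughly like the following. This is a sketch only, not a tested runbook: /dev/sdX2 is a placeholder for an existing ~512 MB ESP partition left by a 5.4-or-later installer, so verify the device with lsblk before touching anything.

```shell
# Sketch only -- do not run blindly; /dev/sdX2 is a placeholder.
pve-efiboot-tool format /dev/sdX2    # format the partition as an ESP
pve-efiboot-tool init /dev/sdX2      # install the bootloader onto it
pve-efiboot-tool refresh             # copy current kernels/initrds to the ESP
```

On a ZFS mirror you would repeat format/init on each disk's ESP so every vdev member stays bootable.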
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
Ralf said:
[previous post, including the "zpool list -v" output, quoted above]
What other partitions are there on sda and sdb? If you have a few hundred MB of space reserved for an ESP, you could put /boot there instead (e.g. by formatting it with ext and manually copying the current /boot there). You'd lose redundancy for /boot, of course, so if you lose that disk you need to manually boot into a live environment, import the rpool, repeat the /boot separation on the second or replacement disk, modify /etc/fstab accordingly, re-install the kernel package(s) to fill /boot again, and only then can you reboot the host into PVE proper. The same applies to adding an extra disk/device for /boot.
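The separation described above could be sketched like this (device names are placeholders for your spare partition and boot disk, the ext2 choice is one option for a small /boot, and the whole thing assumes a BIOS/Grub setup -- back everything up and adapt before running anything):

```shell
# Sketch only -- /dev/sda2 and /dev/sda are placeholders.
mkfs.ext2 /dev/sda2                        # format the spare partition
mount /dev/sda2 /mnt
cp -a /boot/. /mnt/                        # copy current kernels + initrds
echo "UUID=$(blkid -s UUID -o value /dev/sda2) /boot ext2 defaults 0 2" >> /etc/fstab
umount /mnt && mount /boot                 # mount the new /boot per fstab
grub-install /dev/sda                      # re-embed Grub against the new /boot
update-grub
```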
 
Dec 15, 2016
Berlin
fabian said:
[reply quoted above]
Hi Fabian,

Maybe I don't understand this fully, but the servers we run are not UEFI-capable. Is there an alternative to GRUB in that case?
 

guletz

Well-Known Member
Apr 19, 2017
Brasov, Romania
Hi to all,

I was able to upgrade a two-node cluster to PVE 6.x and I do not see any problems!

Good luck and success with your upgrades!

Thanks also to everyone who contributed to this new version!
 

babak

New Member
Jul 13, 2019
Hi,
I made the cluster-local directory /var/lib/vz shared, but for migration only "Restart Mode" is still available.
Is there a way to live-migrate containers with local directories in v6.0?

Regards,
Shirazi
 

vanes

New Member
Nov 23, 2018
"zpool scrub rpool" causes server hang (need reset) when done from web-shell in pve6 (clean root on zfs raid10 install, uefi boot)
tested on two servers.
seems like bug. When i do it (zpool scrub rpool) from putty everything fine.
 

t.lamprecht

Proxmox Staff Member
I made the cluster-local directory /var/lib/vz shared, but for migration only "Restart Mode" is still available.
Is there a way to live-migrate containers with local directories in v6.0?
Containers cannot be live-migrated, independent of shared storage. The technology which tries to achieve that (CRIU) is just not yet ready (at least for CTs with networking and normal distros running inside), and may need many years to get to a state where it can be used for common CTs.

Also, just setting the "shared" flag on a storage does not automatically make it shared. That flag is for the case where you locally mounted a network storage and added it to Proxmox VE as a directory storage; then you can set the "shared" flag to tell PVE that even though the storage looks local, it is already available on other nodes.
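As an illustration of that case (the storage name and mount path below are made up), a locally mounted network share added as a directory storage would carry the flag in /etc/pve/storage.cfg like this:

```
dir: mynfs-dir
        path /mnt/pve-shared
        content images,rootdir
        shared 1
```

The flag only tells PVE the path holds the same data on every node; actually exporting and mounting the share there is still up to you.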
 

t.lamprecht

Proxmox Staff Member
"zpool scrub rpool" causes server hang (need reset) when done from web-shell in pve6 (clean root on zfs raid10 install, uefi boot)
tested on two servers.
seems like bug. When i do it (zpool scrub rpool) from putty everything fine.
Strange, that should not happen. Is there anything in the syslog during the operation? (Maybe keep a "dmesg -wT" and/or "journalctl -f" open in PuTTY while it runs.)

I could not reproduce this here, but as of now I only had a RAIDZ1 system with 4 disks available to test. It would be strange if this only affected RAID 10... Anything special about the system? Is it bare metal, and if so, which CPU vendor?
 

vanes

New Member
t.lamprecht said:
[reply quoted above]
It's a bare-metal install, root on ZFS RAID 10, UEFI boot, ASRock E3C236D2I board, Intel Pentium G4560T, 16 GB ECC RAM. The second test/backup/home server is on a consumer J4205 board.
I started one PuTTY session with "dmesg -wT" and another with "journalctl -f" and reproduced the problem ("zpool scrub rpool" in the web shell): the server hangs. Here are the logs.

UPD: After a reset the server ran for 13 hours, then I started "zpool scrub rpool" from PuTTY and the server hung again. So it's not related to the PVE web shell. If I run "zpool scrub rpool" right after a reboot, everything is fine. If I run it after some uptime, it hangs the server.

Very similar to this: https://github.com/zfsonlinux/zfs/issues/7553
How should I proceed? Go back to PVE 5.4? How do I disable the monthly scrub that will hang my server?
If I comment out the line "24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub" in /etc/cron.d/zfsutils-linux, will that turn off the monthly scrub?

Settings I changed on my two servers:
- governor set to powersave
- limited the ARC
- zfs set atime=off rpool
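For context, "limited the ARC" above usually means capping the ZFS ARC via a module option, e.g. in /etc/modprobe.d/zfs.conf (the 4 GiB value below is purely an example; pick one that fits your RAM):

```
# /etc/modprobe.d/zfs.conf -- cap the ARC at 4 GiB (example value)
options zfs zfs_arc_max=4294967296
```

On a root-on-ZFS system the initramfs also needs refreshing afterwards (update-initramfs -u) for the option to apply at boot.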
 

Attachments


Tomas Waldow

New Member
Dec 9, 2016
Hello, there is one piece of incorrect information on the download page: the version stated is 6.0.4, but the ISO file is 6.0.1. It's not a real issue, but it can confuse someone. Thanks.
 

Mikepop

New Member
Feb 6, 2018
Hi, I've noticed the monitor latency skyrocket after the update in all my clusters.

Is there any way to check why, or how to improve/fix it?

Regards
 

rengiared

Member
Sep 8, 2010
Austria
A short piece of information for everyone with InfiniBand adapters: with the update, the interface names changed from ib0, ib1 to ibs3, ibs3d1 (at least with my Mellanox ConnectX-2 dual-port cards).
This brought the first node I updated to a complete stop, as the networking service wouldn't start, and I had to adjust my interfaces config.
For the next two nodes I updated the config beforehand, and everything went through quite smoothly.
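For anyone hitting the same rename, the adjustment is just updating the stanza names in /etc/network/interfaces before rebooting into the new kernel; a sketch (the address is a placeholder):

```
# old name (PVE 5): ib0 -- new name (PVE 6): ibs3
auto ibs3
iface ibs3 inet static
        address 10.10.10.1/24
```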

One question about the Ceph upgrade: I have one OSD that was down and out during the upgrade (I was too lazy to remove it, unfortunately) and now it shows as an outdated OSD (12.2.11). Can I upgrade to Nautilus, or should I get the OSD updated first, and if so, how?
Thanks in advance, and keep up the great work!
 
