Proxmox VE 6.0 released!

Michael Herf

New Member
May 31, 2019
We internally measured the difference between a 5.0 kernel with the "FPU/SIMD symbol exports" and without them; both were pretty similar (i.e., the difference was mostly measurement noise), but in general somewhat noticeably slower than with the 4.15 kernel. We continuously check ZFS and the kernel for possible improvements regarding this. Note also that the upcoming ZFS 0.8.2 will again have SIMD support, and thus any possible performance loss from that area should soon be gone again.
Would you please include RAID-Z2 in this test? My configuration is a 6-disk RAID-Z2 with a Xeon 4110. Also, I'm seeing no difference on regular striped volumes, but the RAID-Z2 fio seqwrite performance is less than half.

A 50% performance regression is a big deal. The patch to re-enable SIMD on 5.0 kernels only landed a few days ago, so I imagine it needs some time as well before release?
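For reference, a sequential-write test of this kind can be reproduced with a fio job roughly like the following; the dataset path, size, and runtime are placeholders, not the exact job used here:

Code:
# hypothetical sequential-write job against a file on the RAID-Z2 dataset
fio --name=seqwrite --rw=write --bs=1M --size=10G \
    --ioengine=psync --directory=/tank/fio-test \
    --runtime=60 --time_based --group_reporting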
 

Compizfox

New Member
Apr 15, 2017
Assuming your system is still in this state, you can kill PID 3535 and see if the rest of the update proceeds. It will likely tell you that (at least) lxc-pve failed to be completely upgraded, which can then be finished with "apt install -f".
Thanks. I tried that but it doesn't do anything.

I killed apt and dpkg, but trying to finish it with dpkg --configure -a runs into the exact same problem again.
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
Thanks. I tried that but it doesn't do anything.

I killed apt and dpkg, but trying to finish it with dpkg --configure -a runs into the exact same problem again.
If you don't have any containers running, can you try stopping the two LXC services (lxc-monitord and lxc-net) with 'systemctl stop' and then repeating the 'dpkg --configure -a'?
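In concrete terms, that would be something like the following (start the services again afterwards):

Code:
# only if no containers are running on this node
systemctl stop lxc-monitord lxc-net
dpkg --configure -a            # let dpkg finish the half-configured packages
systemctl start lxc-monitord lxc-net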
 

Compizfox

New Member
Apr 15, 2017
If you don't have any containers running, can you try stopping the two LXC services (lxc-monitord and lxc-net) with 'systemctl stop' and then repeating the 'dpkg --configure -a'?
I do have containers running.

Right now it managed to upgrade everything except for the PVE stuff, it looks like:

Code:
Errors were encountered while processing:
 lxc-pve
 pve-container
 pve-manager
 proxmox-ve
 pve-ha-manager
 qemu-server
Right now I'll try stopping my VMs/CTs and running it again.

EDIT: Still no luck, even with the LXC services stopped. I'm not sure if I should reboot; I don't want to lock myself out.

EDIT: Was able to complete the upgrade after rebooting.
 

NoahD

New Member
Jun 28, 2019
Please edit your ceph.conf (/etc/pve/ceph.conf): it still has the "keyring" entry in the global section. Remove it from there and move it down to the client section so that it looks like:

Code:
[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring
This is something we warn you about if you use our "pve5to6" checklist script.
Please, all of you who upgrade: read the upgrade docs closely and use the script to check basic stuff!
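The checker mentioned above is simply run on the node itself before (and during) the upgrade:

Code:
pve5to6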
I figured out the problems, and it was mostly my fault for not paying close attention to certain issues after the Proxmox upgrade from version 5.4 to 6. My test environment upgraded without issues, but it didn't have the 10 GbE network cards that are used for the Ceph cluster. I still haven't upgraded Ceph to Nautilus on my test environment.

So what had happened was that after the upgrade to "Buster", the 10 GbE network interface names changed to something else, leaving the Ceph cluster in limbo and causing the Ceph monitor/manager daemons to constantly stop and start on my three nodes. This was still on the Luminous version, just like my test environment, so I thought maybe it needed to be upgraded. It didn't occur to me to actually look for other issues; I just did the upgrade in hopes that it would fix itself. Because the three nodes couldn't be properly upgraded in preparation for Nautilus, the Ceph drives couldn't be upgraded to the newer version either, which left them in a state that couldn't be used. I was able to fix that by using ceph-volume simple scan /dev/sdd1, which will give you the proper command to update the drive to the new version and enable it for Ceph. You will have to do this on all nodes if they are not seeing the drives properly. This is noted in the Proxmox upgrade notes.
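For the record, the workflow is roughly as follows; the device is the one from above, and the OSD id/fsid placeholders come from the scan output:

Code:
# scan the old ceph-disk style OSD partition and record its metadata
ceph-volume simple scan /dev/sdd1
# then activate/enable it with the id and fsid reported by the scan
ceph-volume simple activate <osd-id> <osd-fsid>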

So I had to fix several issues. One was getting the Ceph cluster to work properly under the new 10 GbE network naming scheme and getting the Ceph drives back online. Another was fixing ceph.conf to properly list the monitors by adding mon_host = xx.xx.xx.xx in the [global] section.
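That is, the global section ends up listing the monitors roughly like this (placeholders instead of the real monitor IPs):

Code:
[global]
         mon_host = <mon1-ip> <mon2-ip> <mon3-ip>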

So if you do the upgrade to Buster and your Ceph cluster starts to crap out, CHECK YOUR Ceph cluster network FIRST!
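One way to avoid the rename problem in the first place (not something from the post above, just a general systemd approach) is to pin the interface name to the card's MAC address with a .link file; the file name, MAC address, and interface name below are placeholders:

Code:
# /etc/systemd/network/10-ceph10g.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=eth10g0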

Thanks to Ceph's resilience I didn't lose any data and everything is working; I just lost a few hours of my time sorting this out. Again, most of it was my fault for not paying close attention to details.
 

sdfungi

New Member
Jul 17, 2019
I’m using X520 in my nodes - not upgraded yet though. Inclined now to wait until you work this out!
I am running Intel X520 DA cards. I went from 5.4 -> 6.0-1 Beta -> 6.0-4 Release and do not have any issues or firmware warnings in my syslog. Though I do run firmware checks and updates quarterly, and had done so just before the install.
 
Jan 21, 2017
Berlin
Just upgraded some Hetzner nodes without any issues.
PX61, PX62, EX61

Upgrades of the following models will follow within 3 months.
PX92, EX52

Other models might follow.
 

vamp

New Member
Jun 24, 2017
Hi there,

I would like to upgrade my Proxmox (one node, no cluster, no Ceph... only local storage).

Two questions:

Do I need to do these two points from the upgrade steps?
1. Cluster: always upgrade to Corosync 3 first
2. Update Ceph Luminous to Nautilus

Second question: my firewall is a VM (VyOS). If I stop it (the checker warns if a VM is running), I have no internet and cannot update. So is it a big problem if I update the system with this VM running?
 

vanes

New Member
Nov 23, 2018
Just installed a clean 6.0 root on ZFS with UEFI boot and I'm trying to limit the ARC size. The way I did it before (add "options zfs zfs_arc_max=2147483648" to "/etc/modprobe.d/zfs.conf", then "update-initramfs -u", then reboot) does not work anymore with UEFI boot.
The command "echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max" does its job after some hours of uptime.
In the beta thread I was advised to implement it via /etc/rc.local. The question is how to do that? I am not a Linux guru and I'm afraid to break something =)
What is the right way to limit the ARC size on PVE 6 with root on ZFS and UEFI boot?
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
Do I need to do these two points from the upgrade steps?
1. Cluster: always upgrade to Corosync 3 first
2. Update Ceph Luminous to Nautilus
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#Actions_step-by-step

Second question: my firewall is a VM (VyOS). If I stop it (the checker warns if a VM is running), I have no internet and cannot update. So is it a big problem if I update the system with this VM running?
No, this is normally only an issue if you really want to keep it running and have a cluster, as then you can (live) migrate it away. The node needs to be rebooted after the upgrade, so sooner or later the VM will stop through the host node reboot if it's not moved. If this is a single node, it's no issue.
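For the clustered case mentioned above, such a live migration can be started from the CLI roughly like this (VM ID and target node name are placeholders):

Code:
# move VM 100 to node "pve2" while it keeps running
qm migrate 100 pve2 --online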
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
Just installed a clean 6.0 root on ZFS with UEFI boot and I'm trying to limit the ARC size. The way I did it before (add "options zfs zfs_arc_max=2147483648" to "/etc/modprobe.d/zfs.conf", then "update-initramfs -u", then reboot) does not work anymore with UEFI boot.
The command "echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max" does its job after some hours of uptime.
In the beta thread I was advised to implement it via /etc/rc.local. The question is how to do that? I am not a Linux guru and I'm afraid to break something =)
What is the right way to limit the ARC size on PVE 6 with root on ZFS and UEFI boot?
Please do not use "/etc/rc.local"; that's not our recommendation (if it even works).

The modprobe location is still correct, see:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_limit_zfs_memory_usage

The update-initramfs hint is also still correct, but with the new UEFI setup you additionally need to run:
Code:
pve-efiboot-tool refresh
To sync the changes out to all the EFI System Partitions, see https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_systemd_boot for details.
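Putting it all together for a root-on-ZFS UEFI install, the whole sequence is roughly the following (the 2 GiB value is just the example from the question above):

Code:
# limit the ZFS ARC to 2 GiB (value is in bytes); edit or create the file
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
update-initramfs -u        # rebuild the initramfs with the new module option
pve-efiboot-tool refresh   # sync it out to all EFI System Partitions
reboot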
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
/etc/rc.local is a very old startup script
I know, and that's fine if it works for you. But as it's only executed at the end of the multi-user "runlevel" (i.e., the compat runlevel systemd target), it can still be a bit late if VMs/CTs have already started and made heavy use of the ARC. With my linked way, the ZFS ARC is already limited from the initial boot stage.
 

0C0C0C

New Member
Jul 12, 2019
I switched from pvetest to pve-no-subscription, no new package updates.
My version is 6.0-4/2a719255; is this the actual final version?
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
I switched from pvetest to pve-no-subscription, no new package updates.
My version is 6.0-4/2a719255; is this the actual final version?
Yes, currently all three repos are in sync. Also, pvetest will always be newer than (or at least as new as) no-subscription, so this is expected in general.
Package transitions are: git + dev. testing -> pvetest -> pve-no-subscription -> pve-enterprise
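For reference, on PVE 6 (Debian Buster) the pve-no-subscription repository corresponds to a line like this, e.g. in /etc/apt/sources.list:

Code:
deb http://download.proxmox.com/debian/pve buster pve-no-subscription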
 
Aug 10, 2018
On PVE 6 the Active Directory connection stopped working.

With the same settings that work fine in PVE 5.4.11, logging in with AD accounts is not possible on PVE 6.0-4.

Where can I check this issue in the CLI?
 

sdet00

New Member
Nov 18, 2017
I followed the steps here and whilst the pve5to6 checklist says that I am on Proxmox 6, it still shows 5.4.11 in the WebUI. Is this a bug?

https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0

Am I missing something? It is also showing 41 packages that can be upgraded under "origin: Proxmox", but if I run apt dist-upgrade, it doesn't seem to upgrade any of these packages.
 
