Proxmox VE 6.0 released!

I upgraded 2 boxes. Both upgrades seemed to go smoothly, but on one machine I am unable to access the shell or the console for any of the VMs on it. I get the following error:

/root/.ssh/config line 1: Bad SSH2 cipher spec 'blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc'.

The /root/.ssh/config file on both machines is identical:
Ciphers blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc

I can access the machine remotely by ssh and I can also ssh into each of the VMs. I just can't access the shell or the consoles from the GUI.

Any help would be appreciated.

Hello,

Try eliminating only one cipher at a time and see if you have luck. First I would try arcfour, then any 128*.
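For example (just a sketch, assuming the GUI console uses the system OpenSSH client: blowfish and arcfour were removed in OpenSSH 7.6, and Buster ships 7.9), a trimmed line could look like:

Code:
# /root/.ssh/config - only ciphers still supported by the OpenSSH version in Buster
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc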

good luck!
 
Hi,

Congratulations and my respects to all devs for their time spent on this new PMX version. The same goes to all the users who spent time testing the beta version, and not least to the users who sent feedback on their success/fail cases. This weekend I hope to put this new version to use!

Thx to all, and good luck!
 
I’m using X520 in my nodes - not upgraded yet though. Inclined now to wait until you work this out!

I pushed through the upgrades and so far everything is working well. I still have the error spamming the kernel ring buffer, but I changed the rsyslogd config so those lines no longer end up in /var/log/kern.log, so at least that isn't filling up.
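For anyone wanting to do the same, a drop rule along these lines in /etc/rsyslog.d/ is roughly what I mean (sketch only - the file name and the "ixgbe" match string are placeholders for whatever text your repeated message actually contains):

Code:
# /etc/rsyslog.d/10-drop-nic-spam.conf (example; match on the real repeated text)
:msg, contains, "ixgbe" stop

and then restart rsyslog with "systemctl restart rsyslog".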

My guess is that I'll have to see if there's firmware for the x520 and see if that does anything.
 
Check with "ps faxl" or similar where the configure call (it is one of the children of the running apt process) is blocking.
Thanks.

So it looks like it's blocking on asking for a password on the console?

Code:
4     0  8322     1  20   0  96412 40244 poll_s S    ?          0:08 apt full-upgrade
4     0  2187  8322  20   0  12476  7256 do_wai Ss+  pts/3      0:00  \_ /usr/bin/dpkg --status-fd 21 --configure --pending
0     0  3331  2187  20   0   2388  1600 do_wai S+   pts/3      0:00      \_ /bin/sh /var/lib/dpkg/info/lxc-pve.postinst configure 3.1.0-3
0     0  3520  3331  20   0  10868  3672 poll_s S+   pts/3      0:25          \_ /bin/systemctl restart lxc-monitord.service lxc-net.service
4     0  3535  3520  20   0  19104  3368 poll_s S+   pts/3      0:00              \_ /bin/systemd-tty-ask-password-agent --watch

(I did this over SSH, and physical access to the machine's console is cumbersome...)
 
I upgraded the first small cluster of 3 nodes from 5.4 to 6.0 without any issues. zfs trim is running now.

I love the new live migration option for VMs! (Too bad you'll lose all your ZFS snapshots after the migration, but you can't have everything. :) )
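(For reference, on the CLI I believe this is something along the lines of the following, with the VMID and target node as placeholders:)

Code:
qm migrate 100 othernode --online --with-local-disks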
 
Finished the upgrade, and the only issue I had was that my Infiniband card's interface names changed from ifbX to ifbpXsY, so I had to update my interfaces config after the upgrade.
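In case it helps anyone else, the change is just swapping the old name for the new one in /etc/network/interfaces, roughly like this (interface names and address made up for illustration):

Code:
# before: iface ifb0 inet static ...
auto ifbp5s0
iface ifbp5s0 inet static
        address 192.168.0.2/24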
 
I've upgraded my test environment, which worked without issues. Then I upgraded the production environment and am getting this error, and the ceph daemon isn't starting:
Code:
auth: unable to find a keyring on /etc/pve/priv/ceph.mon.pve4.keyring: (13) Permission denied
It's happening on all 3 of my nodes. Any ideas?
 
I'm pretty sure this build is suffering from much lower ZFS write performance in kernel 5.0/ZFS 0.8 - it's about half as fast for me.

On a fresh RAIDz2 with 6 disks, I'm getting 200MB/sec writes (uncompressed), whereas Proxmox 5.4 gives me 450MB/sec.
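(For anyone wanting to compare, a plain sequential write test along these lines is one way to measure this - illustrative only, pool/dataset names are placeholders, and compression is off so the /dev/zero data isn't compressed away:)

Code:
zfs create -o compression=off tank/bench
dd if=/dev/zero of=/tank/bench/zeros bs=1M count=8192 conv=fdatasync status=progress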

(ZFSOnLinux github has issue 8836 which parallels this, but I'm not allowed to make a link here.)
 
auth: unable to find a keyring on /etc/pve/priv/ceph.mon.pve4.keyring: (13) Permission denied and ceph daemon isn't starting. It's happening on all of my 3 nodes. Any ideas?

Please edit your ceph.conf (/etc/pve/ceph.conf); it still has the "keyring" entry in the global section. Remove it from there and move it down to the client section so that it looks like:

Code:
[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

This is something we warn you about if you use our "pve5to6" checklist script.
Please, all of you who upgrade: read the upgrade docs closely and use the script to check basic stuff!
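For reference, the checker is just run as a normal command on each node, e.g.:

Code:
# run on every node before the upgrade (and again after fixing anything it reports)
pve5to6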
 
On a fresh RAIDz2 with 6 disks, I'm getting 200MB/sec writes (uncompressed), whereas Proxmox 5.4 gives me 450MB/sec.

We internally measured the difference between a 5.0 kernel with the "FPU/SIMD symbol exports" and one without them; the two were pretty similar (i.e., the difference was mostly measurement noise), but both were in general somewhat noticeably slower than the 4.15 kernel. We continuously check ZFS and the kernel for possible improvements regarding this. Note also that the upcoming ZFS 0.8.2 will again have SIMD support, so any performance loss from that area should soon be gone again.
 
I don't know why, but after the upgrade from beta to stable I have weird time jumps on my server, and the CPU usage and server load graphs don't show anything; their start and end times both show 1970-01-01 01:00:00. Also, if I do manage to log into the UI, then after the time is set back I have to log in again because it says I have an invalid PVE ticket.

I have run hwclock --systohc but it doesn't seem to help.
The next commands were run a few seconds apart:
Code:
root@h01:~# date
Wed 17 Jul 2019 02:43:36 PM UTC
root@h01:~# date
Wed 17 Jul 2019 06:38:06 AM UTC
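If anyone has an idea, the time sync side looks like the obvious place to check, e.g. (generic commands; I have not found the cause yet):

Code:
timedatectl status
systemctl status systemd-timesyncd chrony ntp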
 
Thanks.

So it looks like it's blocking on asking for a password on the console?

Code:
4     0  8322     1  20   0  96412 40244 poll_s S    ?          0:08 apt full-upgrade
4     0  2187  8322  20   0  12476  7256 do_wai Ss+  pts/3      0:00  \_ /usr/bin/dpkg --status-fd 21 --configure --pending
0     0  3331  2187  20   0   2388  1600 do_wai S+   pts/3      0:00      \_ /bin/sh /var/lib/dpkg/info/lxc-pve.postinst configure 3.1.0-3
0     0  3520  3331  20   0  10868  3672 poll_s S+   pts/3      0:25          \_ /bin/systemctl restart lxc-monitord.service lxc-net.service
4     0  3535  3520  20   0  19104  3368 poll_s S+   pts/3      0:00              \_ /bin/systemd-tty-ask-password-agent --watch

(I did this over SSH, and physical access to the machine's console is cumbersome...)

Assuming your system is still in this state, you can kill PID 3535 and see if the rest of the update proceeds. It will likely tell you that (at least) lxc-pve failed to be completely upgraded; finishing that can then be done with "apt install -f".
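I.e., something along these lines, using the PID from your listing:

Code:
kill 3535             # the blocking systemd-tty-ask-password-agent from the ps output above
apt install -f        # then finish configuring the half-upgraded packages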
 
Thanks a lot for your hard work devs! Love the new features like TRIM support and live migration with local disks.

I need to upgrade some 10+ node clusters and doing them all in one go won't be an option.
Would running 5.4 and 6.0 in the same cluster be an issue for a few weeks after upgrading to corosync 3?
 
Good to see that no IGMP settings are required on the switch anymore. I just deleted the proxy (is this the right word?) and the cluster is still in good condition :)
 
Upgrade from 5.x to 6.0 went fine without errors. But I am missing the pve command.
Code:
-bash: pve: command not found
Has the command changed?
 
Upgrade from 5.x to 6.0 went fine without errors. But I am missing the pve command.
Did you provide the 'pve' command yourself? Double tab to get the auto-completion.
 
Did you provide the 'pve' command yourself? Double tab to get the auto-completion.

Double tab on 'pve' gives me:

Code:
root@pve:~# pve

pve5to6            pvebanner          pvecm              pve-efiboot-tool   pvefw-logger       pve-ha-lrm         pvemailforward.pl  pveperf            pvereport          pvesm              pvestatd           pveum              pveupgrade
pveam              pveceph            pvedaemon          pve-firewall       pve-ha-crm         pvemailforward     pvenode            pveproxy           pvesh              pvesr              pvesubscription    pveupdate          pveversion

root@pve:~# pve

no pve command there ....
 
Would running 5.4 and 6.0 in the same cluster be an issue for a a few weeks after upgrading to corosync 3?

Hmm, I'd not advise it, but as long as you can guarantee that you do not need to add/delete nodes you should be fine - it can be possible, but it's a bit hacky and you're basically on your own if you need to do cluster changes.
And this really should only be a temporary solution!
 
