Proxmox VE 6.0 released!

I upgraded 2 boxes. Both upgrades seemed to go smoothly, but on one machine I am unable to access the shell or the console for any of the VMs on it. I get the following error:

/root/.ssh/config line 1: Bad SSH2 cipher spec 'blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc'.

The /root/.ssh/config file on both machines is identical:
Ciphers blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc

I can access the machine remotely by ssh and I can also ssh into each of the VMs. I just can't access the shell or the consoles from the GUI.

Any help would be appreciated.
 
I do not see what it involves... the API, corosync, the PVE cluster FS, or cluster management?

Most of it happens in access control, but since the keys are on the clustered config filesystem (pmxcfs), and the rotation must be locked (so that multiple nodes don't try to rotate at the same time), that one is involved too. The access control code path only gets triggered on logins, but we wanted rotations to happen even if no login occurs, so the "pvestatd" daemon also calls the rotation method if the key is older than 24 hours. So that one is involved as well, but only indirectly (and it would work without that too).
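A minimal sketch of the rule described above (illustrative only, not the actual PVE code; the key path and lock file are assumptions):
Code:
#!/bin/sh
# Illustrative sketch: rotate a key when it is older than 24 hours,
# guarded by a lock so concurrent runs do not race.
# KEY and LOCK paths are assumptions, not the real PVE locations.
KEY=/etc/pve/authkey.pub
LOCK=/run/lock/authkey-rotate.lock
(
    flock -n 9 || exit 0                             # another rotation is already running
    if [ -n "$(find "$KEY" -mmin +1440 2>/dev/null)" ]; then
        echo "key older than 24 hours, rotating..."  # real rotation lives in PVE's access control code
    fi
) 9>"$LOCK"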
 
/root/.ssh/config line 1: Bad SSH2 cipher spec 'blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc'.

Did you use our pve5to6 checklist script? It normally warns about those lines. (If not, let this be a reminder for the remaining nodes and for others: please use it!)

The /root/.ssh/config file on both machines is identical:

Ciphers blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
You can just delete that line.
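Either remove it outright or trim it to algorithms the newer OpenSSH still accepts; a minimal sketch, assuming the OpenSSH version shipped with Debian Buster:
Code:
# remove the offending line entirely:
sed -i '/^Ciphers /d' /root/.ssh/config
# or, alternatively, edit /root/.ssh/config and keep only ciphers the
# current OpenSSH still accepts, e.g.:
#   Ciphers aes128-ctr,aes192-ctr,aes256-ctr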
 
Anyone else using Intel X520 10Gbit cards? I'm in the middle of upgrading my three nodes to v6 and while everything appears to be going very, very well, checking kernel logs reveals:
Code:
ixgbe 0000:03:00.0: Warning firmware error detected FWSW: 0x00000000

It doesn't appear to cause any issues, but the error appears every second and makes syslog unreadable.
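In case it is useful for comparing notes, the driver and firmware versions in play can be checked like this (eno1 is a placeholder interface name):
Code:
# driver and firmware version of the affected port (replace eno1 with your interface)
ethtool -i eno1
# count how often the warning is currently being logged
journalctl -k | grep -c 'Warning firmware error detected'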
 
I see that corosync v3 does not actually support multicast. AFAIK unicast, with corosync v2, was suggested only for a maximum of 4 nodes. Is that true with v3, too? What are the new limits?

Depending on the switches used (latency and packets per second matter much more than bandwidth), one should be able to have about 16 - 20 nodes with commodity hardware. Very fast (dedicated) switches and NICs and dedicated CPU processing power can help to achieve more, but then it's maybe easier to have fewer but "bigger" nodes.

Also, kronosnet has some plans to (re-)integrate multicast, but there is no clear timeline; if that happens, PVE will try hard to support both.
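For reference, the transport in corosync 3 is set in the totem section of corosync.conf; an illustrative excerpt (cluster name and link details are placeholders) could look like this:
Code:
# illustrative excerpt of /etc/pve/corosync.conf (names and values are placeholders)
totem {
  cluster_name: mycluster
  config_version: 3
  version: 2
  transport: knet       # corosync 3 / kronosnet, unicast only for now ("udpu" was the corosync 2 unicast transport)
  interface {
    linknumber: 0
  }
}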
 
So I had run pve5to6 on one of the machines. They have the same configuration. One works, one doesn't. I deleted the line as you suggested and I still have the same problem.

OK, I have to say I misread your post and thought you had issues with the host shell, not the VM or CT console/shell. Firewall issue? Does the VM/CT still run?
You should probably open a new thread for this; it is easier to help there and there is less "noise" here.
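Before that, a quick sanity check might be to confirm the guests are actually running and whether the node firewall is active (100 and 101 are placeholder IDs):
Code:
# is the guest actually running?
qm status 100
pct status 101
# is the PVE firewall active on this node?
pve-firewall status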
 
I get this at the bottom of apt update:

E: The repository 'https://enterprise.proxmox.com/debian/pve buster Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Is this what I'm supposed to see?
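For context, that error usually means the enterprise repository is enabled without a valid subscription; on a default installation it is defined as sketched below (standard paths assumed):
Code:
# /etc/apt/sources.list.d/pve-enterprise.list (default location, needs a subscription)
deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
# without a subscription, comment that line out and use the no-subscription
# repository instead, e.g. in /etc/apt/sources.list:
deb http://download.proxmox.com/debian/pve buster pve-no-subscription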
 
I'm trying to upgrade a Proxmox 5.4 box to 6.0. pve5to6 did not report any problems.

My Apt is stuck on:

Code:
Setting up lxc-pve (3.1.0-61)

It has been running for over an hour now and doesn't seem to get anywhere. Any advice?
 
I'm trying to upgrade a Proxmox 5.4 box to 6.0. pve5to6 did not report any problems.

My Apt is stuck on:

Code:
Setting up lxc-pve (3.1.0-61)

It has been running for over an hour now and doesn't seem to get anywhere. Any advice?

Check with "ps faxl" or similar where the configure call (it is one of the children of the running apt process) is blocking.
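Something along these lines should show the hanging child (a generic sketch, not PVE-specific):
Code:
# full process tree with wait channels; the lxc-pve postinst script should
# show up as a child of the running apt/dpkg process
ps faxl | less
# a more targeted view of just the dpkg/apt subtree
ps -eo pid,ppid,stat,wchan:20,args --forest | grep -B2 -A5 dpkg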
 
Just upgraded my server running 5.4, ZFS, and a four-disk setup with two striped mirrors, booting from the first mirror.
Hardware is an HP MicroServer Gen8 with the disk controller in AHCI SATA mode.
So far, every 5.4 kernel update had worked and the system was able to reboot.

Now, after upgrading to 6.0 and reboot, the system crashes with

Code:
Attempting Boot From Hard Drive (C:)
error: no such device: (uuid)
error: unknown filesystem.
Entering rescue mode...
grub rescue>

I know this was mentioned in another thread, but this is the first time it has happened and it must be related to the upgrade to PVE 6.0.
Enclosed is the log of the upgrade...

Any idea of what might have gone wrong?

This and similar issues are the main reason we switched to a non-Grub UEFI bootloader setup with 6.0 - the Grub ZFS implementation is nearly unmaintained and severely behind the actual ZFS on Linux code base (it is basically a read-only parser for the on-disk ZFS data structures from a couple of years ago), with some as-yet-unfixed but hard/impossible to reproduce bugs that can lead to unbootable pools. All the writes from the dist-upgrade probably made some on-disk structure line up in exactly one of those ways that Grub chokes on. You can try to randomly copy/move/... the kernel and initrd files in /boot around in the hope that they get rewritten in a way that Grub "likes" again.

But the sensible way forward, if you still have free space (or even ESP partitions, e.g. if the server was set up with 5.4) on your vdev disks, is to use "pve-efiboot-tool" to opt into the new bootloader setup. If that is not an option, you likely need to set up some sort of extra boot device that is not on ZFS, or redo the new bootloader setup via backup - reinstall - restore. We tried hard to investigate and fix these issues within Grub (I lost track of the number of hours I spent digging through Grub debug output via serial console sometime last year, and can personally attest that there are many, many more fun ways to spend your time ;)), but in the end it is sometimes easier to cut your losses and start from scratch. As an intermediate solution / quick fix to get your system booted again, consider moving or copying your /boot partition to some external medium like a high-quality USB disk, or a spare disk if you have one.
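A rough sketch of opting into the new bootloader setup with pve-efiboot-tool (the device name is a placeholder; double-check the partition layout and the reference documentation before running anything like this on a production pool):
Code:
# format the ESP partition on each vdev disk (placeholder device!)
pve-efiboot-tool format /dev/sda2
# initialize it for the new bootloader setup and copy the current kernels over
pve-efiboot-tool init /dev/sda2
# after future kernel updates, keep the ESPs in sync with:
pve-efiboot-tool refresh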
 
What great news, thanks to the whole Proxmox team. Question: is ZFS root on NVMe disks ready for production use?
 
Anyone else using Intel X520 10Gbit cards? I'm in the middle of upgrading my three nodes to v6 and while everything appears to be going very, very well, checking kernel logs reveals:
Code:
ixgbe 0000:03:00.0: Warning firmware error detected FWSW: 0x00000000

It doesn't appear to cause any issues, but the error appears every second and makes syslog unreadable.

I’m using X520 in my nodes - not upgraded yet though. Inclined now to wait until you work this out!
 