Proxmox VE 6.2 released!

One of the announcements was support for up to 8 corosync links.

If more independent corosync links are used, does this mean it is more reasonable to have larger clusters, beyond 32 nodes?

If I have a cluster up and running currently with only link0, how can I configure more links?
 
If more independent corosync links are used, does this mean it is more reasonable to have larger clusters, beyond 32 nodes?

We know of Proxmox VE 6.x clusters in the wild with >50 nodes; they have a good/stable dedicated cluster network and did some slight config tuning (IIRC). So it is possible, albeit the network engineers took some time to build up quite a bit of experience and knowledge on the subject.
Cluster traffic benefits most from low latency; bandwidth is seldom an issue.
So using a dedicated network for the main link, with the others as fallback, is recommended, especially for bigger setups.
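As a rough sketch of what that can look like when creating a new cluster (cluster name and addresses below are placeholders), link0 goes on the dedicated low-latency network and link1 on a shared network as fallback:
Code:
# on the first node: link0 on the dedicated cluster network, link1 as fallback
pvecm create mycluster --link0 10.10.10.1 --link1 192.168.1.1
# on each further node, giving that node's own addresses for both links
pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 192.168.1.2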

If I have a cluster up and running currently with only link0, how can I configure more links?

Full integration in the web interface works for new clusters and on node join, for now.
Dynamic addition/deletion after the cluster has been created still requires editing the corosync configuration manually, but it is a relatively straightforward editing process, documented here:
https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_adding_redundant_links_to_an_existing_cluster
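For reference, a minimal sketch of what that manual edit looks like when adding a second link (node name, addresses and the config_version value below are just examples; follow the linked documentation for the exact procedure via /etc/pve/corosync.conf):
Code:
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1   # existing link0 address
    ring1_addr: 10.20.20.1   # newly added link1 address
  }
  # add a ring1_addr entry to every other node as well
}

totem {
  cluster_name: mycluster
  config_version: 4          # must be increased on every change
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1            # new interface section for the second link
  }
  # remaining totem options stay unchanged
}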
 
Thanks, I'm familiar with corosync.conf, just curious how the GUI was coming along.

You wouldn't happen to have any links to some further reading on the subject of those larger clusters, would you?
 
hi

I noticed that after the upgrade, a container (Debian 8) that I had running FreeSWITCH (FusionPBX) on stopped starting services.
I was able to replicate it after spinning up the latest Debian 9 template and installing FusionPBX; rebooting the container will not start the FreeSWITCH service.

Code:
systemctl status freeswitch.service
* freeswitch.service - freeswitch
   Loaded: loaded (/lib/systemd/system/freeswitch.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2020-05-17 22:17:54 EDT; 16min ago
  Process: 5968 ExecStartPre=/bin/mkdir -p /var/run/freeswitch/ (code=exited, status=214/SETSCHEDULER)

May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Control process exited, code=exited status=214
May 17 22:17:54 siplxc systemd[1]: Failed to start freeswitch.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Unit entered failed state.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Failed with result 'exit-code'.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Service hold-off time over, scheduling restart.
May 17 22:17:54 siplxc systemd[1]: Stopped freeswitch.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Start request repeated too quickly.
May 17 22:17:54 siplxc systemd[1]: Failed to start freeswitch.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Unit entered failed state.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Failed with result 'exit-code'.

I did the same exact install inside KVM and everything works and starts as expected. I started a thread on the FusionPBX forum with not much feedback. Any help is appreciated.
 
Hi all, I am new to Proxmox. The other day I installed version 6.1, and I see 6.2 is out.

When I go to run the command apt-get update I get the error below.

Is this because I don't have a subscription?



Hit:1 http://security.debian.org buster/updates InRelease
Hit:2 http://ftp.au.debian.org/debian buster InRelease
Hit:3 http://ftp.au.debian.org/debian buster-updates InRelease
Err:4 https://enterprise.proxmox.com/debian/pve buster InRelease
401 Unauthorized [IP: 51.91.38.34 443]
Reading package lists...
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/InRelease 401 Unauthorized [IP: 51.91.38.34 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve buster InRelease' is not signed.
TASK ERROR: command 'apt-get update' failed: exit code 100

Regards

Matthew
 
Hi,
I have tried to upgrade from 6.1 to 6.2 without success. I tried to do it from the shell and via SSH.

What am I doing wrong?

Code:
root@pve:/# apt update && apt dist-upgrade
Hit:1 http://security.debian.org/debian-security buster/updates InRelease
Hit:2 http://download.proxmox.com/debian/pve buster InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
  pve-qemu-kvm
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

 
I just updated one of my machines to 6.2-4 and two of my VMs are not starting: Windows 2008 and 2016.
Both are using BIOS, not UEFI, and they can't find a bootable disk.

"No bootable device" from SeaBIOS

Underlying storage (ZFS) looks ok. Just started debugging it now.

I have another Windows 2016 machine that is booting fine. Haven't yet figured out what is going on.

EDIT: OK, I figured it out. I was running a custom BIOS that I built with SLIC tables from the host machine installed into it, for OEM Windows activation. This was a problem many years ago with the bundled BIOS in Proxmox as well. I have disabled this custom BIOS and it's booting now. No idea what is going to happen with Windows activation though..

/usr/share/kvm/bios-SLIC-1722.bin

EDIT2: OK, Windows deactivated itself, so I guess I'm going to have to rebuild the custom BIOS against the newer version for Proxmox compatibility. Time to dig up those instructions.
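For anyone wanting to try something similar: one way to attach such a custom SeaBIOS binary to a VM, and to remove it again, is via the VM's args option. The VM ID and the mechanism below are an illustration, not necessarily what I used:
Code:
# hypothetical example: attach a custom SeaBIOS binary to VM 100 via the args option
qm set 100 --args '-bios /usr/share/kvm/bios-SLIC-1722.bin'
# revert to the bundled SeaBIOS
qm set 100 --delete args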
 
It seems like you're missing some repositories in the APT sources file, please add:
Code:
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
to your /etc/apt/sources.list and then retry.
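A related note for anyone without a subscription (like the 401 Unauthorized error reported above): the enterprise repository will always return 401 in that case, so it is usually commented out and the no-subscription repository used instead. A sketch of the standard PVE 6.x / Buster entries:
Code:
# /etc/apt/sources.list.d/pve-enterprise.list - comment out without a subscription
# deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

# /etc/apt/sources.list - add the no-subscription repository instead
deb http://download.proxmox.com/debian/pve buster pve-no-subscription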

Thank you for your support. Now it works.
Regards
Oliver
 
Hi there, today I upgraded the first host in a 4-node cluster with Fibre Channel shared storage on an HP MAS2050 SAN. The host HBA is the 82Q from HP, based on a QLogic chipset.
0b:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Subsystem: Hewlett-Packard Company StorageWorks 82Q
There must be something with the 5.4.41 kernel
Linux pve4 5.4.41-1-pve #1 SMP PVE 5.4.41-1 (Fri, 15 May 2020 15:06:08 +0200) x86_64 GNU/Linux
because after reboot I didn't find any LV stored on the SAN (used as shared LVM), but I did find some errors in syslog:
May 26 15:00:59 pve4 kernel: [ 1.939708] qla2xxx [0000:00:00.0]-0005: : QLogic Fibre Channel HBA Driver: 10.01.00.19-k.
May 26 15:00:59 pve4 kernel: [ 1.940159] qla2xxx [0000:0b:00.0]-001a: : MSI-X vector count: 32.
May 26 15:00:59 pve4 kernel: [ 1.940163] qla2xxx [0000:0b:00.0]-001d: : Found an ISP2532 irq 16 iobase 0x(____ptrval____).
When the HBA is initialized at boot time, no LUN is recognized, and thus multipath has no device to present its own mapping.
If I force a reset of the HBA and an issue_lip
echo "1" > /sys/class/fc_host/host4/issue_lip
I get other errors in syslog:
[ 872.832162] qla2xxx [0000:0b:00.1]-00fb:4: QLogic HPAJ764A - HPE 82Q 8Gb Dual Port PCI-e FC HBA.
[ 872.832173] qla2xxx [0000:0b:00.1]-00fc:4: ISP2532: PCIe (5.0GT/s x8) @ 0000:0b:00.1 hdma+ host#=4 fw=8.07.00 (90d5).
[ 873.243009] qla2xxx [0000:0b:00.1]-500a:4: LOOP UP detected (8 Gbps).
[ 886.545479] rport-2:0-0: blocked FC remote port time out: removing target and saving binding
[ 887.569457] rport-4:0-0: blocked FC remote port time out: removing target and saving binding
[ 915.975383] qla2xxx [0000:0b:00.1]-5039:4: Async-tmf error - hdl=6 completion status(28).
[ 915.975638] qla2xxx [0000:0b:00.1]-8030:4: TM IOCB failed (102).
[ 918.945929] qla2xxx [0000:0b:00.0]-5039:2: Async-tmf error - hdl=6 completion status(28).
[ 918.946193] qla2xxx [0000:0b:00.0]-8030:2: TM IOCB failed (102).
[ 925.844633] qla2xxx [0000:0b:00.1]-500b:4: LOOP DOWN detected (2 3 0 0).
[ 926.697302] qla2xxx [0000:0b:00.1]-500a:4: LOOP UP detected (8 Gbps).
[ 929.055264] qla2xxx [0000:0b:00.0]-500b:2: LOOP DOWN detected (2 3 0 0).
[ 929.908041] qla2xxx [0000:0b:00.0]-500a:2: LOOP UP detected (8 Gbps).
I also triggered a reset of the ports on the SAN, but with no effect.
If I boot with the previous kernel, everything is fine:
pve-kernel-5.3.18-3-pve
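For completeness, the kind of checks that can be used to see whether the LUNs come back after a rescan (host numbers and commands below are a generic sketch, not a transcript from this box):
Code:
# rescan the FC SCSI hosts for LUNs
echo "- - -" > /sys/class/scsi_host/host2/scan
echo "- - -" > /sys/class/scsi_host/host4/scan
# check block devices and multipath maps
lsblk
multipath -ll
# check whether the shared LVM volume group on the SAN is visible again
vgs && lvs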
 
Hi there, today I upgraded the first host in a 4-node cluster with Fibre Channel shared storage on an HP MAS2050 SAN. The host HBA is the 82Q from HP, based on a QLogic chipset.

Can you try a slightly older but still 5.4-based kernel: apt install pve-kernel-5.4.34-1-pve and select it on reboot?
Also, pve-kernel-5.4.27-1-pve could be good to test too, if possible for you.
There was some churn and a bigger series of backports in those versions, so it could be a quick check to see where the regression was introduced.
 
Can you try a slightly older but still 5.4-based kernel: apt install pve-kernel-5.4.34-1-pve and select it on reboot?
Also, pve-kernel-5.4.27-1-pve could be good to test too, if possible for you.
With 5.4.34-1-pve it isn't working.
With 5.4.27-1-pve it isn't working.

Still bound to the 5.3 kernel, which is working.
 
hi

I noticed that after the upgrade, a container (Debian 8) that I had running FreeSWITCH (FusionPBX) on stopped starting services.
I was able to replicate it after spinning up the latest Debian 9 template and installing FusionPBX; rebooting the container will not start the FreeSWITCH service.

I did the same exact install inside KVM and everything works and starts as expected. I started a thread on the FusionPBX forum with not much feedback. Any help is appreciated.
Good day, everyone!
I have exactly the same error with FreeSWITCH on a Debian 10 guest deployed in an unprivileged container on Proxmox VE 6.1. It seems this is a consequence of the container being unprivileged: there is no such error in a privileged container.
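A possible workaround sketch, assuming the failure comes from the realtime scheduling directives (CPUScheduling*/IOScheduling*) in the stock freeswitch.service, which an unprivileged container is not allowed to apply:
Code:
# inside the container: copy the unit and strip the scheduling directives,
# then let systemd use the copy in /etc/systemd/system (it takes precedence)
cp /lib/systemd/system/freeswitch.service /etc/systemd/system/freeswitch.service
sed -i -e '/^CPUScheduling/d' -e '/^IOScheduling/d' /etc/systemd/system/freeswitch.service
systemctl daemon-reload
systemctl restart freeswitch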
 
OK, thank you for reporting back. It could also be outdated firmware not matching the version/behavior of the newer kernel module.
So I spent several hours testing and upgrading everything I had to upgrade in terms of firmware and so on.
What I discovered is that:
  • In the QLogic firmware setup (CTRL+Q at POST time when prompted), "Enable LIP Reset" was disabled, which I think is the reason I wasted a lot of time before submitting my request here. TAPE support was also enabled; I disabled it, but I don't think it is relevant.
  • Somehow the new kernel driver also has to negotiate the connection mode to the SAN, which was by default set to "point-to-point"; see the attached image.
[attachment 1590659965097.png: SAN port connection mode settings]
Having changed the parameters (ports A4 and B4), the link is now UP and I can see the LVs on the latest kernel.

Code:
root@pve4:~# uname -a
Linux pve4 5.4.41-1-pve #1 SMP PVE 5.4.41-1 (Fri, 15 May 2020 15:06:08 +0200) x86_64 GNU/Linux
root@pve4:~# lvs
  LV            VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve       twi-aotz-- <77.20g             0.00   1.60
  root          pve       -wi-ao---- 34.00g
  swap          pve       -wi-ao---- 7.00g
  vm-100-disk-0 san4T     -wi-a----- 32.00g
  vm-101-disk-0 san4T     -wi-a----- 32.00g
  vm-102-disk-0 san4T     -wi-a----- 250.00g
  vm-103-disk-0 san4T     -wi-a----- 40.00g
  vm-103-disk-1 san4T     -wi-a----- 1.00t
  vm-104-disk-0 san4T     -wi-a----- 250.00g
  vm-105-disk-0 san4T     -wi-a----- 256.00g
  vm-107-disk-0 san4T     -wi-a----- 32.00g
  vm-110-disk-0 san4T     -wi-a----- 16.00g
  vm-201-disk-0 san4T     -wi-a----- 120.00g
  vm-205-disk-0 san4T     -wi-a----- 500.00g
  vm-205-disk-1 san4T     -wi-a----- 250.00g
  vm-205-disk-2 san4T     -wi-a----- 100.00g
  vm-205-disk-3 san4T     -wi-a----- 50.00g
  vm-301-disk-0 san4T     -wi-a----- 150.00g
  vm-303-disk-0 san4T     -wi-a----- 146.00g
  vm-304-disk-0 san4T     -wi-a----- 50.00g
  vm-305-disk-0 san4T     -wi-a----- 32.00g
  vm-201-disk-0 sanSAS15K -wi-a----- 250.00g
  vm-304-disk-0 sanSAS15K -wi-a----- 250.00g
root@pve4:~#
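For reference, the resulting link state can also be checked from sysfs (the host number is just an example):
Code:
cat /sys/class/fc_host/host4/port_state   # "Online" when the link is up
cat /sys/class/fc_host/host4/speed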
 
Having changed the parameters (ports A4 and B4), the link is now UP and I can see the LVs on the latest kernel.

OK, glad you figured it out and thanks for sharing the information about how you resolved this issue!
 
Hello everyone! Last week the apt update && apt dist-upgrade to 6.2 worked fine on a SoYouStart server which had previously been installed with 6.x via the SoYouStart installer. Yesterday, installing and updating a second, identical server, both the SoYouStart install and the update && upgrade process seemed to complete successfully. However, upon reboot, the server came back up with the entire contents of /etc/pve missing. Same result upon trying again, and also when reinstalling the machine on which the update had worked just fine a week ago. Ideas? Thanks! Tom
 
