Proxmox VE 7.2 released!

Zerstoiber

New Member
Jun 2, 2022
It is. Set the shutdown policy to migrate and it will happen for both reboot and shutdown. The section you quoted describes what happens if the "conditional" policy is configured; that part could be structured or hinted at a bit more explicitly.

Thomas, sorry, you are right!
In my testlab (configured for "migrate"), shutdown performed the migration as expected.

I just reviewed the lab environment to find out why it did not work on restart of a node.
Turns out I did some testing in between, cloned some VMs, and forgot to add them to HA! So those did not get migrated later, but were frozen instead.


Coming from VMware, some things require a bit more learning: in vCenter you don't have to explicitly enable HA for each VM. It's the other way around; it lets you configure overrides so that actions are not performed even though HA/DRS is globally enabled.
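For anyone else coming from VMware, both pieces are explicit in Proxmox VE. A minimal sketch of what I had missed, assuming a VM with ID 100 (the ID is just an example):

# /etc/pve/datacenter.cfg - cluster-wide HA shutdown policy
ha: shutdown_policy=migrate

# add the VM as an HA resource so the policy applies to it
ha-manager add vm:100

# check that the resource is now managed
ha-manager status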
 

hookas

New Member
Jun 9, 2022
After upgrading to 7.2 we get many kernel panics in KVM Linux guests (various OSes from Debian 7 up to the latest Debian 11, with or without the QEMU guest agent) when a live migration is performed from one host to another in the datacenter, in all directions.

For example, 2-3 out of 5 migrated VMs crash with a kernel panic at random, so to migrate safely (and avoid data loss from damaged file systems) we have to shut the VMs down before migrating.

We did not have any problems with live migration of KVM guests before the upgrade to VE 7.2.
Is there a solution, and does the Proxmox team know about these kernel panics on live migration of KVM VMs?

We use ZFS rpool volumes on all hosts.

Our hosts:
Host 1:
16 x Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (2 Sockets)
Kernel Version Linux 5.15.35-1-pve #1 SMP PVE 5.15.35-3
PVE Manager Version 7.2-4

Host 2:
4 x Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz (1 Socket)
Kernel Version Linux 5.15.35-1-pve #1 SMP PVE 5.15.35-3
PVE Manager Version 7.2-4

Host 3:
16 x Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz (1 Socket)
Kernel Version Linux 5.15.35-1-pve #1 SMP PVE 5.15.35-3
PVE Manager Version 7.2-4

Host 4:
12 x Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz (1 Socket)
Kernel Version Linux 5.15.35-1-pve #1 SMP PVE 5.15.35-3
PVE Manager Version 7.2-4
 

aaron

Proxmox Staff Member
Staff member
Jun 3, 2019
2,868
444
88
After upgrading to 7.2 we get many kernel panics in KVM Linux guests (various OSes from Debian 7 up to the latest Debian 11, with or without the QEMU guest agent) when a live migration is performed from one host to another in the datacenter, in all directions.
There seems to be a bug in the kernel when live migrating from newer to older CPUs. We are currently evaluating whether we can backport the fix (see our bug tracker: https://bugzilla.proxmox.com/show_bug.cgi?id=4073#c27).

The CPUs in your cluster seem to span quite a few generations. As a workaround for the time being, you can use an older kernel; 5.13 should work fine AFAIK. You can use the "proxmox-boot-tool kernel pin" command so that you don't have to select the kernel on each reboot.
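For reference, pinning an older kernel with proxmox-boot-tool looks roughly like this; the 5.13.19-6-pve version string below is only an example, so check the list on your host first:

# show the kernels proxmox-boot-tool knows about
proxmox-boot-tool kernel list
# pin one of them so it is booted by default
proxmox-boot-tool kernel pin 5.13.19-6-pve
# later, to go back to booting the newest installed kernel
proxmox-boot-tool kernel unpin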
 

hookas

New Member
Jun 9, 2022
There seems to be a bug in the kernel when live migrating from newer to older CPUs. We are currently evaluating whether we can backport the fix (see our bug tracker: https://bugzilla.proxmox.com/show_bug.cgi?id=4073#c27).

The CPUs in your cluster seem to span quite a few generations. As a workaround for the time being, you can use an older kernel; 5.13 should work fine AFAIK. You can use the "proxmox-boot-tool kernel pin" command so that you don't have to select the kernel on each reboot.

I hope this bug will be fixed soon. Yes, with the older kernel the problem is gone...
 

lDemoNl

Member
Oct 23, 2020
Hi! During the upgrade process to v7.2, something appears to have become corrupted: the node went offline and I could see a block device error on the display (screenshot attached). After rebooting, the node starts normally. I don't know how this affects Proxmox.
[Attachment: dmesg.PNG showing the block device error]
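If the journal on the node is persistent (which it should be on a standard Proxmox VE install), the kernel messages from before that reboot can still be pulled up to capture the full error, along the lines of:

# kernel messages from the previous boot
journalctl -k -b -1
# or only warnings and errors
journalctl -k -b -1 -p warning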
 

guzi

New Member
Dec 15, 2020
There seems to be a bug in the new version's multipath handling of Fibre Channel LUNs (detected on Cisco blades / VIC FCoE HBA).
In some circumstances the Fibre Channel connections are lost.
kernel: [2683802.439925] sd 1:0:1:7: Power-on or device reset occurred
kernel: [2683854.653599] device-mapper: multipath: 253:56: Reinstating path 65:144.
kernel: [2683854.653807] sd 2:0:0:4: Power-on or device reset occurred
kernel: [2683854.654398] sd 2:0:0:4: emc: ALUA failover mode detected
kernel: [2683854.654673] sd 2:0:0:4: emc: Found valid sense data 0x 5, 0x24, 0x 0 while sending CLARiiON trespass command.
kernel: [2683854.654746] sd 2:0:0:4: emc: at SP A Port 5 (bound, default SP B)
kernel: [2683854.654749] device-mapper: multipath: 253:56: Failing path 65:144.
multipathd then switches paths to keep the LUN up, but sometimes this does not work and the server loses all connections to the storage.
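For anyone trying to reproduce this: the path state while it happens can be watched with the standard multipath tools (device names and WWIDs will of course differ per setup), e.g.:

# current topology and per-path states
multipath -ll
# path status as seen by the daemon
multipathd show paths
# follow the kernel log for the reset/trespass messages
dmesg -wT | grep -iE 'multipath|reset|trespass'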

The workaround of booting kernel 5.13 via the GRUB boot menu seems to fix the issue, but 5.15 is not ready in terms of multipath FC usage.
 

ThaFacialHair

New Member
Jun 27, 2022
Just a heads up: be careful installing this release on a machine with a VGA monitor. Due to the new kernel, the installer will show an "out of range" or similar warning on the screen. The only workaround so far is to install 7.1 and upgrade from that.
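If you go that route, the in-place move from 7.1 to 7.2 afterwards is just the regular package upgrade, assuming a Proxmox VE repository (enterprise or pve-no-subscription) is configured:

apt update
apt dist-upgrade
reboot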
 

Guy

Active Member
Jan 15, 2009
Since upgrading to 7.2-5 I'm finding that the mouse pointer in the noVNC console is way off from where it is on the local machine. This makes it almost impossible to use the virtual systems.

I have seen this on Windows 7, Windows 10, and Ubuntu desktops.

[Screenshot: noVNC console with the mouse pointer offset]
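In case it helps with debugging: one thing worth checking first is whether the affected VMs still have their USB tablet device enabled, since that is what gives noVNC absolute pointer positioning (VMID 100 is just an example):

# no output means the default (tablet enabled) is in effect; "tablet: 0" means it was disabled
qm config 100 | grep -i tablet
# re-enable it if it was turned off
qm set 100 --tablet 1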
 

hookas

New Member
Jun 9, 2022
There seems to be a bug in the kernel when live migrating from newer to older CPUs. We are currently evaluating whether we can backport the fix (see our bug tracker: https://bugzilla.proxmox.com/show_bug.cgi?id=4073#c27).

The CPUs in your cluster seem to span quite a few generations. As a workaround for the time being, you can use an older kernel; 5.13 should work fine AFAIK. You can use the "proxmox-boot-tool kernel pin" command so that you don't have to select the kernel on each reboot.

Hello, any news or plans regarding this issue?
 
