Does your server have a BCM5720 network card? Then you might be hitting this known BCM5720 issue.
Booting kernel 6.2.16-19-pve seems to be the only option at the moment.
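If you need to stay on that kernel across reboots, Proxmox's boot tool can pin it. A minimal sketch, assuming 6.2.16-19-pve (the version mentioned above) is installed on your host; check `proxmox-boot-tool kernel list` first:

```shell
# List the kernels proxmox-boot-tool knows about
proxmox-boot-tool kernel list

# Pin the known-good kernel so it stays the default across upgrades and reboots
proxmox-boot-tool kernel pin 6.2.16-19-pve

# Later, once a fixed kernel is released, remove the pin
proxmox-boot-tool kernel unpin
```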
I have a question regarding the change "Drop support for Ceph Octopus and Pacific, as they are not supported in Proxmox VE 8."
Does this mean that if we are running an external ceph cluster on Octopus (not Proxmox) Proxmox 8 will simply not work anymore?
And then we would have to upgrade that...
Sadly this still doesn't work for me with kernel 5.15.39-4-pve
Live-migrating from:
Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz (latest bios available)
To:
Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz (latest bios available)
Causes the VM to hang with 100% CPU. Only fix is to reset (or cold migrate).
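One common mitigation (not a guaranteed fix) when live-migrating between different CPU generations is to give the VM a baseline vCPU model that both hosts support, instead of `host`. A sketch, assuming Proxmox VE 8 (which ships the `x86-64-v2-AES` model) and a placeholder VM ID of 100:

```shell
# Assumption: VM 100 currently uses cpu=host; switch to a portable baseline model.
# Takes effect on the next full VM stop/start, not on a reboot from inside the guest.
qm set 100 --cpu x86-64-v2-AES

# Verify the change
qm config 100 | grep ^cpu
```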
Same here for our Windows VMs. Some reboot, some don't.
Windows Server 2019 pc-i440fx-5.2 (SeaBIOS), VirtIO SCSI
Windows Server 2022 pc-i440fx-6.0 (OVMF UEFI), VirtIO SCSI
After some googling, I probably need to convert the disk from GPT to MBR first:
https://docs.microsoft.com/en-us/windows-server/storage/disk-management/change-a-gpt-disk-into-an-mbr-disk
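For reference, the diskpart route described on that page is destructive: diskpart only converts an empty disk, so everything must be backed up and all volumes removed first. A sketch as a diskpart script (disk 0 is a placeholder):

```
rem convert-to-mbr.txt -- run inside the Windows guest as: diskpart /s convert-to-mbr.txt
rem WARNING: "clean" wipes the selected disk completely; take a full backup first.
select disk 0
clean
convert mbr
```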
I'm having issues with a Windows Server 2022 VM running UEFI (OVMF) that hangs when it automatically reboots after an update. The only way to recover is to stop/start the VM.
I have other Windows Server VMs running SeaBIOS without any issues.
I'm guessing it's not as easy as just changing the BIOS to SeaBIOS in...
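If you do go that route, the firmware type itself is a one-line change in Proxmox (VM ID 101 is a placeholder). Note that a Windows install done under OVMF on a GPT disk won't boot under SeaBIOS until the disk has been converted to MBR:

```shell
# Assumption: VM 101 is the Windows Server 2022 guest; adjust the ID.
qm set 101 --bios seabios

# To switch back to UEFI:
# qm set 101 --bios ovmf
```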
I think that I had the same problem you have. The LVM was visible on the other hosts but not usable. Rebooting the other nodes fixed that problem.
And as was pointed out to me by bbgeek17: using iSCSI+LVM on a Proxmox cluster is not like using a shared filesystem such as VMFS (ESXi, vSphere) or NFS...