Does your server have a BCM5720 network card? Then you might be hitting this BCM5720 issue.
Booting kernel 6.2.16-19-pve seems to be the only option at the moment.
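For anyone in the same situation, a minimal sketch of staying on that kernel (assuming the host boots via proxmox-boot-tool; otherwise adjust to your bootloader):

    # show the kernels currently available for booting
    proxmox-boot-tool kernel list
    # keep booting 6.2.16-19-pve until the BCM5720 regression is fixed
    proxmox-boot-tool kernel pin 6.2.16-19-pve
    # later, to return to the newest installed kernel
    proxmox-boot-tool kernel unpin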
I have a question regarding the change "Drop support for Ceph Octopus and Pacific, as they are not supported in Proxmox VE 8."
Does this mean that if we are running an external ceph cluster on Octopus (not Proxmox) Proxmox 8 will simply not work anymore?
And then we would have to upgrade that...
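In case it's useful to others, this is how I'd confirm what the external cluster is actually running before deciding anything (run on a monitor/admin node of the external Ceph cluster, assuming you have access there):

    # show the Ceph release and version of every daemon in the cluster
    ceph versions
    # overall cluster health before/after any upgrade step
    ceph -s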
Sadly this still doesn't work for me with kernel 5.15.39-4-pve
From:
Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz (latest bios available)
To:
Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz (latest bios available)
Live migrating between these causes the VM to hang at 100% CPU. The only fix is to reset it (or cold migrate).
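For reference, what I'm doing is roughly the following (109 is just an example VM ID and node-e5 a placeholder name for the E5-2699 host):

    # live migration between the two CPU generations - this is what hangs for me
    qm migrate 109 node-e5 --online
    # workaround: cold migrate instead - shut the guest down, move it offline,
    # then start it again on the target node
    qm shutdown 109
    qm migrate 109 node-e5
    qm start 109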
Same here for our Windows VMs. Some reboot, some don't.
Windows Server 2019 pc-i440fx-5.2 (SeaBIOS), VirtIO SCSI
Windows Server 2022 pc-i440fx-6.0 (OVMF UEFI), VirtIO SCSI
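(The machine type / firmware above is taken from qm config; in case anyone wants to compare their own guests, something like this shows the relevant lines - 101 is just an example VM ID:)

    # show machine type, firmware and disk controller for a given VM
    qm config 101 | grep -E 'machine|bios|scsihw|ostype'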
After some googling, I probably need to convert the disk from GPT to MBR first:
https://docs.microsoft.com/en-us/windows-server/storage/disk-management/change-a-gpt-disk-into-an-mbr-disk
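From that article, the diskpart route looks roughly like the sketch below. Disk 1 is only a placeholder, and every volume on the disk has to be backed up and removed first, so treat this as my reading of the doc rather than a tested recipe:

    diskpart
    rem back up everything first - the conversion requires an empty disk
    list disk
    select disk 1
    rem clean removes all partitions/volumes on the selected disk
    clean
    convert mbr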
I'm having issues with a Windows Server 2022 VM running UEFI (OVMF) that hangs when it automatically reboots after updates. The only way to recover is to stop/start the VM.
I have other Windows Server VMs running SeaBIOS without any issues.
I'm guessing it's not as easy as just changing the BIOS to SeaBIOS in...
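What I do to recover at the moment (100 is just an example VM ID):

    # reset/reboot from the GUI doesn't help once it hangs; a full stop + start does
    qm stop 100
    qm start 100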
I think I had the same problem you have. The LVM was visible on the other hosts but not usable. Rebooting the other nodes fixed that problem.
And as bbgeek17 pointed out to me, using iSCSI+LVM on a Proxmox cluster is not like using a shared filesystem such as VMFS (ESXi, vSphere) or NFS...
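If someone hits the same thing, this is roughly what seemed to bring the volume group back on the other nodes without a full reboot - nimble-vg is the VG name from my setup, and I'm not sure this is the official way:

    # refresh LVM's view of the devices
    pvscan --cache
    vgscan
    # activate the volume group so its LVs become usable on this node
    vgchange -ay nimble-vg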
Been busy, so I haven't had time to continue testing this until now. But live migration doesn't seem to work on iSCSI LVM:
can't activate LV '/dev/nimble-vg/vm-109-disk-0': Cannot process volume group nimble-vg
ERROR: online migrate failure - remote command failed with exit code 255...
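For anyone debugging the same failure, this is roughly what I'm checking on the target node before retrying (VM 109 and nimble-vg are from my setup):

    # can the target node see the PV and VG at all?
    pvs
    vgs nimble-vg
    # is the disk LV listed, and is it active anywhere?
    lvs -o lv_name,lv_attr nimble-vg
    # try the same activation the migration attempts
    lvchange -ay /dev/nimble-vg/vm-109-disk-0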
Hi!
I'm trying to add shared iSCSI storage to our small cluster of three hosts, but I'm stuck at what feels like the last part.
I have made a LUN on our Nimble CS300 and successfully connected the target on all three hosts as /dev/sdc.
In the cluster GUI the iSCSI target shows up happy on all...
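For completeness, my understanding of the remaining step is roughly the following - the LVM creation must run on one node only, and the storage/VG names are just the ones I picked:

    # on one node only: put LVM on top of the iSCSI LUN
    pvcreate /dev/sdc
    vgcreate nimble-vg /dev/sdc
    # register the VG as shared LVM storage for the whole cluster
    pvesm add lvm nimble-lvm --vgname nimble-vg --shared 1 --content images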