The USB device is a ConBee II - it does work with pve-edk2-firmware=3.20220526-1.

Does the VM boot with the newer version of pve-edk2-firmware if you remove the passthrough?

No, it doesn't.

Discussion moved to here.
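If you need the affected VM running in the meantime, one possible workaround is pinning the firmware package at the last version that worked (the version string is the one from the post above; adjust it to whatever your repository actually offers):
Code:
apt install pve-edk2-firmware=3.20220526-1
apt-mark hold pve-edk2-firmware    # optional: prevent apt from upgrading it again for now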
HA is installed using this script:
Code:
bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/vm/haos-vm.sh)"
The issue should be reproducible:
1) I installed Proxmox 7.2 on a new SSD
2) installed HA from the script above - the VM boots and works OK
3) upgraded to 7.4
4) the VM doesn't boot any more: UEFI boot loop
I can reproduce this on a Windows 10 VM and on a Windows 11 VM.
Intel NUC7i5BNH - 4 x Intel(R) Core(TM) i5-7260U CPU @ 2.20GHz (1 Socket)

Does it happen consistently every time you try to boot these VMs or only sometimes? Does downgrading QEMU make them work again? That is, using
Code:
apt install pve-qemu-kvm=7.1.0-4
(or look at /var/log/apt/history.log to check what version you had installed before) and then shutdown/stop and start the VMs again.

Code:
root@pve2:~# apt update
Hit:1 http://deb.debian.org/debian bullseye-backports InRelease
Get:2 http://security.debian.org bullseye-security InRelease [48.4 kB]
Hit:3 http://alcateia.ufscar.br/debian bullseye InRelease
Get:4 http://alcateia.ufscar.br/debian bullseye-updates InRelease [44.1 kB]
Hit:5 http://download.proxmox.com/debian/pve bullseye InRelease
Fetched 92.4 kB in 5s (18.2 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
1 package can be upgraded. Run 'apt list --upgradable' to see it.
root@pve2:~# apt list --upgradable
Listing... Done
proxmox-ve/stable 7.4-1 all [upgradable from: 7.3-1]
N: There are 5 additional versions. Please use the '-a' switch to see them.
root@pve2:~# apt upgrade -y
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
proxmox-ve
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
Never use
Code:
apt upgrade
Only ever use:
Code:
apt full-upgrade
or:
Code:
apt dist-upgrade
with Proxmox products: [1]!
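In practice that means running the following instead of `apt upgrade -y`; a full-upgrade is allowed to install or remove additional packages, which is why `proxmox-ve` then no longer stays "kept back":
Code:
apt update
apt full-upgrade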
Maybe this solves your problem already...
With pve-edk2-firmware 3.20230228-1 the issue doesn't appear anymore.

Glad to hear! But what about the Windows 10 VM? The config you posted doesn't even use EDK2/OVMF. If it works now, maybe it was just a one-off issue there?
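For anyone following along, one quick way to check which firmware package version a node currently has installed (standard PVE/Debian tooling):
Code:
pveversion -v | grep pve-edk2-firmware
dpkg -s pve-edk2-firmware | grep Version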
At least on an HP DL380 G8 it is necessary to add `intremap=off` to the kernel cmdline (else the console is terribly sluggish) - try that for the G7 as well (hit 'e' in the boot screen on 'Install Proxmox VE' and add it to the linux line at the end, after splash=silent). However, we usually plug the ISO directly into the machine - the iLO has caused some issues in the past - so I'm not sure if sharing the ISO via iLO works well with the G7.
I hope this helps!

Nope, doesn't help.
This is my workaround, try it:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS1,115200 console=tty0 intel_iommu=off intremap=off"

Doesn't help.
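For reference, to try that line on an already installed system (rather than in the installer boot menu), the usual way to make it persistent - assuming the host boots via GRUB; hosts using systemd-boot edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead - is roughly:
Code:
nano /etc/default/grub    # set GRUB_CMDLINE_LINUX_DEFAULT as in the line above
update-grub               # regenerate the GRUB config
reboot
The same flags can of course be tested one-off from the installer's boot menu first, as suggested above.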
Code:
Mar 31 10:40:04 proxmox-mon-01 pvedaemon[158819]: starting vnc proxy UPID:proxmox-mon-01:00026C63:0070E3BD:64269C64:vncproxy:101:root@pam:
Mar 31 10:40:04 proxmox-mon-01 pvedaemon[1304]: <root@pam> starting task UPID:proxmox-mon-01:00026C63:0070E3BD:64269C64:vncproxy:101:root@pam:
Mar 31 10:40:13 proxmox-mon-01 pveproxy[1310]: worker 154041 finished
Mar 31 10:40:13 proxmox-mon-01 pveproxy[1310]: starting 1 worker(s)
Mar 31 10:40:13 proxmox-mon-01 pveproxy[1310]: worker 158850 started
Mar 31 10:40:14 proxmox-mon-01 pvedaemon[158819]: connection timed out
Mar 31 10:40:14 proxmox-mon-01 pvedaemon[1304]: <root@pam> end task UPID:proxmox-mon-01:00026C63:0070E3BD:64269C64:vncproxy:101:root@pam: connection timed out
Mar 31 10:40:14 proxmox-mon-01 pveproxy[158849]: worker exit
Code:
boot: cdn
bootdisk: scsi0
cores: 2
cpu: host
ide0: local:101/vm-101-cloudinit.qcow2,media=cdrom
ide2: none,media=cdrom
ipconfig1: ip=HOSTIP/24,gw=GWIP
memory: 4096
name: test-iscsi2
net0: virtio=82:FC:B9:89:31:AC,bridge=vmbr0,link_down=1
net1: virtio=C2:36:0E:B1:A7:01,bridge=vmbr0,tag=VLANID
numa: 0
ostype: l26
protection: 1
scsi0: local:101/vm-101-disk-0.raw,discard=on,size=10G
scsihw: virtio-scsi-pci
smbios1: uuid=6967654a-06b4-4463-a28d-fc81290e8be0
sockets: 1
Sorry for the question, it's been a while since I last tried Proxmox, but why don't the TurnKey templates appear anymore? Do I have to do something to add them, or are they gone for good? I'm using v7.4.

Try running pveam update. Any error messages?
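For completeness, the CLI way to refresh the appliance index and check whether the TurnKey templates are listed (the template name in the last line is just a placeholder):
Code:
pveam update                              # refresh the appliance/template index
pveam available --section turnkeylinux    # list available TurnKey templates
pveam download local <template-name>      # download one into the 'local' storage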
@czechsys I have experienced similar-sounding problems on a couple of HP servers (can't remember the models). Not a fix, but to get around it I ended up doing a minimal Debian install and installing Proxmox on top of that. Details at https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye
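The short version of that wiki page, in case it saves someone a click (a sketch only - check the page for current details; it assumes a plain Debian 11 Bullseye install with a proper /etc/hosts entry for the host's IP):
Code:
# add the Proxmox VE no-subscription repository and its signing key
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bullseye pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg

# update and install Proxmox VE on top of Debian
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi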
It's even crazier. Old 17" monitors report "Out of range" or "No input supported", and a full-HD monitor is the same. Everything stops working after the GRUB selection and before the EULA screen.
The PVE 7.1-2 ISO works without a hiccup.
Keep up the good work!
One notable addition is the extended support for VM balancing. I understand that the balancing is triggered only under specific conditions such as node failure, resource/guest start-up, and HA group changes. I am wondering whether VM balancing will also be supported in cases where all the nodes are healthy but the distribution of VMs/load is uneven, so as to automatically migrate VMs for improved load sharing between nodes.

Yes, this is planned. It's the "Dynamic-Load scheduling mode" on the roadmap: https://pve.proxmox.com/wiki/Roadmap#Roadmap