Hi,
Please share the VM configuration:
qm config <ID>
Is there a publicly available ISO of the software for testing?
Please also test with
pve-qemu-kvm=8.2.2-1
to see whether the regression was introduced between QEMU 8.1 and 8.2 or between QEMU 8.2 and 9.0.
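A version-pinned install is one way to do this test; this is a sketch, assuming the 8.2.2-1 build is still available in the configured Proxmox repository (or cached in /var/cache/apt/archives), and using VM ID 103 from the config below:

```shell
# Install a specific pve-qemu-kvm build for testing; apt accepts
# an exact version with the pkg=version syntax.
apt install pve-qemu-kvm=8.2.2-1

# Stop and start the VM so it picks up the new QEMU binary --
# a reboot inside the guest keeps the old process running.
qm stop 103 && qm start 103
```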
If this is a 32-bit guest, please try the
workaround mentioned here (adding
lm=off
to the CPU argument) to see if it has the same root cause.
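For reference, one possible way to apply that flag is via the VM's raw args option; this is only a sketch -- how a manual -cpu in args interacts with the -cpu line Proxmox generates from the cpu: setting is an assumption, and the base model should match the VM's configured CPU type (host in the config below):

```shell
# Hypothetical sketch: disable the long-mode (64-bit) CPU feature
# for VM 103 by appending a raw QEMU argument.
qm set 103 --args '-cpu host,lm=off'
```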
Details below. The ISOs are available to anyone with an account at checkpoint.com and come with a trial license. I can provide the ISO if needed.
I'm now seeing something very odd. Normally, I use a KVM QCOW2 image provided by Check Point for fast deployments. It's mainly designed for cloud deployments, but it works 100% reliably in Proxmox automated by Terraform -- I've been using it for almost exactly one year and have brought up dozens of test environments in that time, easily 100+ VMs. I can spin that up now with pve-qemu-kvm 8 and it works, where it didn't with pve-qemu-kvm 9.

I've also brought up at least 20 instances using fresh installs from ISO. However, if I try to boot from the ISO now, it doesn't boot: it can see the media, but I just get "boot:" as if it can't find a bootable kernel. The same ISO boots just fine on another Proxmox installation running QEMU 9 (though that host exhibits the reboot issue as reported). To summarize:

- If I upgrade the main Proxmox back to 9.0.0-6, I can boot the ISO, but the reboot issue occurs.
- If I downgrade to 8.1.5-6, the reboot issue disappears, but the ISO won't boot at all.
- The QCOW2 image boots either way: it has no issues with 8.1.5-6 or 8.2.2-1 and hits the reboot issue with 9.0.0-6.
Other ISOs work fine on either version, and no other VMs are having issues. I've already confirmed via checksum that the ISOs are good. I'm at a bit of a loss.
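For anyone reproducing this, the checksum verification can be sketched as below; the file and hash here are illustrative stand-ins, not Check Point's actual ISO or published hash:

```shell
# Illustrative only: create a stand-in file in place of the real ISO.
printf 'demo contents' > /tmp/demo.iso

# Compute the SHA-256, then verify the file against it; in practice the
# expected hash comes from the vendor's download page. sha256sum -c
# prints "<file>: OK" on a match and exits non-zero on a mismatch.
expected="$(sha256sum /tmp/demo.iso | cut -d' ' -f1)"
echo "$expected  /tmp/demo.iso" | sha256sum -c -
```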
VM config:
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
ide2: iso:iso/Check_Point_R81.20_T631.iso,media=cdrom,size=4335904K
memory: 32768
meta: creation-qemu=8.1.5,ctime=1720806618
name: cust-mgmt
net0: virtio=BC:24:11:BD:38:24,bridge=vmbr30
numa: 0
ostype: l26
scsi0: vmpool:vm-103-disk-0,size=120G
smbios1: uuid=9d9fb94e-6762-4cb8-8497-cbb8cc712ae7
sockets: 1
tags: work
vmgenid: e75d0905-e97e-4f94-a908-a09264100139
---
I upgraded to 8.2.2-1 and the reboot issue did not return.
---
This is not 32-bit.
---
For fun, here is my pveversion as it stands:
root@vmhost:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.8: 6.8.8-2
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.8-1-pve-signed: 6.8.8-1
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1