Memory hotplug prevents VM boot

Just upgraded my testlab to the latest PVE no-subscription packages. I had not installed updates for 3-6 months, so I have no idea which one broke it.

The VM crashes during boot with a kernel trace; as soon as I disable memory hotplug, it boots just fine.

I am using the Debian cloud generic image.

Code:
[    4.912084] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[    4.912084] RIP: 0010:iowrite8+0x9/0x60
[    4.912084] Code: be 2d bb 48 89 04 24 e8 33 a8 37 00 0f 0b 48 8b 04 24 48 83 c4 08 c3 cc cc cc cc 66 0f 1f 44 00 00 48 81 fe ff ff 03 00 76 08 <40> 88 3e c3 cc cc cc cc 48 81 fe 00 00 01 00 76 0b 0f b7 d6 89 f8
[    4.912084] RSP: 0018:ffffad38801afc00 EFLAGS: 00010292
[    4.912084] RAX: ffffad388008d000 RBX: ffffad38801afc88 RCX: 0000000000000000
[    4.912084] RDX: 000000000000002f RSI: ffffad388008d014 RDI: 0000000000000000
[    4.912084] RBP: ffff929cf689a800 R08: ffff929cf72391af R09: 0000000000000000
[    4.912084] R10: 0000000000000000 R11: ffff929cf72391af R12: 0000000000000000
[    4.912084] R13: ffff929cf689a810 R14: 0000000000000002 R15: 0000000000000000
[    4.912084] FS:  00007f76411958c0(0000) GS:ffff929cfe400000(0000) knlGS:0000000000000000
[    4.912084] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    4.912084] CR2: ffffad388008d014 CR3: 00000000053d0006 CR4: 0000000000370eb0
[    4.912084] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    4.912084] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[    4.912084] Call Trace:
[    4.912084]  vp_reset+0x1b/0x50 [virtio_pci]
[    4.912084]  register_virtio_device+0x75/0x120 [virtio]
[    4.912084]  virtio_pci_probe+0xb3/0x150 [virtio_pci]
[    4.912084]  local_pci_probe+0x3f/0x80
[    4.912084]  ? _cond_resched+0x16/0x50
[    4.912084]  pci_device_probe+0x101/0x1b0
[    4.912084]  really_probe+0xe2/0x460
[    4.912084]  driver_probe_device+0xe5/0x150
[    4.912084]  device_driver_attach+0xa9/0xb0
[    4.912084]  __driver_attach+0xb5/0x170
[    4.912084]  ? device_driver_attach+0xb0/0xb0
[    4.912084]  ? device_driver_attach+0xb0/0xb0
[    4.912084]  bus_for_each_dev+0x75/0xc0
[    4.912084]  bus_add_driver+0x13a/0x200
[    4.912084]  driver_register+0x8b/0xe0
[    4.912084]  ? 0xffffffffc02ad000
[    4.912084]  do_one_initcall+0x41/0x1d0
[    4.912084]  ? do_init_module+0x23/0x250
[    4.912084]  ? kmem_cache_alloc_trace+0xf5/0x200
[    4.912084]  do_init_module+0x4c/0x250
[    4.912084]  __do_sys_finit_module+0xb1/0x120
[    4.912084]  do_syscall_64+0x30/0x40
[    4.912084]  entry_SYSCALL_64_after_hwframe+0x61/0xc6
[    4.912084] RIP: 0033:0x7f764164c2e9
[    4.912084] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 77 8b 0d 00 f7 d8 64 89 01 48
[    4.912084] RSP: 002b:00007ffc91faacf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[    4.912084] RAX: ffffffffffffffda RBX: 00005610242a8840 RCX: 00007f764164c2e9
[    4.912084] RDX: 0000000000000000 RSI: 00007f76417e9e2d RDI: 0000000000000005
[    4.912084] RBP: 0000000000020000 R08: 0000000000000000 R09: 00005610242a7280
[    4.912084] R10: 0000000000000005 R11: 0000000000000246 R12: 00007f76417e9e2d
[    4.912084] R13: 0000000000000000 R14: 00005610242aa120 R15: 00005610242a8840
[    4.912084] Modules linked in: crc32c_intel virtio_pci(+) virtio_ring virtio
[    4.912084] CR2: ffffad388008d014
[    4.912084] ---[ end trace 4198e3111dfb2f38 ]---
[    4.912084] RIP: 0010:iowrite8+0x9/0x60
[    4.912084] Code: be 2d bb 48 89 04 24 e8 33 a8 37 00 0f 0b 48 8b 04 24 48 83 c4 08 c3 cc cc cc cc 66 0f 1f 44 00 00 48 81 fe ff ff 03 00 76 08 <40> 88 3e c3 cc cc cc cc 48 81 fe 00 00 01 00 76 0b 0f b7 d6 89 f8
[    4.912084] RSP: 0018:ffffad38801afc00 EFLAGS: 00010292
[    4.912084] RAX: ffffad388008d000 RBX: ffffad38801afc88 RCX: 0000000000000000
[    4.912084] RDX: 000000000000002f RSI: ffffad388008d014 RDI: 0000000000000000
[    4.912084] RBP: ffff929cf689a800 R08: ffff929cf72391af R09: 0000000000000000
[    4.912084] R10: 0000000000000000 R11: ffff929cf72391af R12: 0000000000000000
[    4.912084] R13: ffff929cf689a810 R14: 0000000000000002 R15: 0000000000000000
[    4.912084] FS:  00007f76411958c0(0000) GS:ffff929cfe400000(0000) knlGS:0000000000000000
[    4.912084] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    4.912084] CR2: ffffad388008d014 CR3: 00000000053d0006 CR4: 0000000000370eb0
[    4.912084] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    4.912084] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 
The VM only has 5 GB of memory and it worked before, so something in QEMU must have broken again.

I will post the vm config later.
 
Hi,
I'll see if I can reproduce it once you post the config. Can you re-install the older QEMU and check whether it works with that: apt install pve-qemu-kvm=X.Y.Z-V (replacing the capital letters with the version numbers)? You can check /var/log/apt/history.log to see what the old version was, and please also tell us what the new version is.
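For example, along these lines (the exact version string will differ on your system, so check the log first):
Code:
# find the previously installed version in the upgrade history
grep pve-qemu-kvm /var/log/apt/history.log
# then install it explicitly, e.g.
apt install pve-qemu-kvm=7.0.0-4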
 
I tried downgrading to 7.0.0-4 and 6.2.0-11, but that did not make a difference.

Could it be another package? qemu-server, etc.?


To reproduce on a Proxmox ZFS install:
Code:
cd /tmp
wget https://cloud.debian.org/images/cloud/bullseye/20230124-1270/debian-11-generic-amd64-20230124-1270.qcow2

VM_ID=$(pvesh get /cluster/nextid)
qm create $VM_ID \
  --name debian-cloud \
  --cpu host --numa 1 \
  --cores 1 --sockets 1 --vcpus 1 \
  --memory 1024 --balloon 512 \
  --net0 virtio,bridge=vmbr0 \
  --hotplug disk,network,usb,memory,cpu \
  --bios ovmf --machine q35 --ostype l26 \
  --scsihw virtio-scsi-pci \
  --serial0 socket \
  --vga none

qm set $VM_ID --efidisk0 local-zfs:0
qm set $VM_ID --ide2 local-zfs:cloudinit
qm importdisk $VM_ID /tmp/debian-11-generic-amd64-20230124-1270.qcow2 local-zfs
qm set $VM_ID --scsi0 local-zfs:vm-$VM_ID-disk-1,discard=on,ssd=1
qm set $VM_ID --boot order=scsi0

Start the VM and open the xterm.js serial console.

The VM does not complete boot with memory hotplug enabled.

Stop the VM, disable memory hotplug, and it boots through to the login screen / cloud-init finish message.
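For reference, disabling memory hotplug from the CLI just means dropping memory from the hotplug list:
Code:
qm stop $VM_ID
qm set $VM_ID --hotplug disk,network,usb,cpu
qm start $VM_ID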

VM config:
Code:
agent: 0
autostart: 0
balloon: 512
bios: ovmf
boot: order=scsi0
cores: 1
cpu: host
efidisk0: local-zfs:vm-107-disk-0,size=1M
hotplug: disk,network,usb,memory,cpu
ide2: local-zfs:vm-107-cloudinit,media=cdrom
machine: q35
memory: 1024
name: debian-cloud
net0: virtio=9A:02:33:93:23:35,bridge=vmbr0
numa: 1
ostype: l26
scsi0: local-zfs:vm-107-disk-1,discard=on,size=2G,ssd=1
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=fe6f43ce-25be-4dec-81b1-614d10df5699
sockets: 1
vcpus: 1
vga: none
vmgenid: bacb4248-38c5-4c38-825d-d18feb60de8b

Upgrade log:
Code:
Start-Date: 2023-02-11  20:42:00
Commandline: apt full-upgrade -y
Install: proxmox-mail-forward:amd64 (0.1.1-1, automatic), pve-kernel-5.15.85-1-pve:amd64 (5.15.85-1, automatic)
Upgrade: pve-docs:amd64 (7.2-2, 7.3-1), libcurl4:amd64 (7.74.0-1.3+deb11u3, 7.74.0-1.3+deb11u5), krb5-locales:amd64 (1.18.3-6+deb11u2, 1.18.3-6+deb11u3), bind9-host:amd64 (1:9.16.33-1~deb11u1, 1:9.16.37-1~deb11u1), libgssapi-krb5-2:amd64 (1.18.3-6+deb11u2, 1.18.3-6+deb11u3), libcurl3-gnutls:amd64 (7.74.0-1.3+deb11u3, 7.74.0-1.3+deb11u5), proxmox-widget-toolkit:amd64 (3.5.1, 3.5.5), libpve-rs-perl:amd64 (0.7.2, 0.7.3), corosync:amd64 (3.1.5-pve2, 3.1.7-pve1), libnftables1:amd64 (0.9.8-3.1, 0.9.8-3.1+deb11u1), pve-firmware:amd64 (3.5-6, 3.6-3), git:amd64 (1:2.30.2-1, 1:2.30.2-1+deb11u1), tzdata:amd64 (2021a-1+deb11u7, 2021a-1+deb11u8), zfs-zed:amd64 (2.1.6-pve1, 2.1.9-pve1), libtasn1-6:amd64 (4.16.0-2, 4.16.0-2+deb11u1), zfs-initramfs:amd64 (2.1.6-pve1, 2.1.9-pve1), spl:amd64 (2.1.6-pve1, 2.1.9-pve1), pve-qemu-kvm:amd64 (7.0.0-4, 7.1.0-4), libnvpair3linux:amd64 (2.1.6-pve1, 2.1.9-pve1), libproxmox-acme-perl:amd64 (1.4.2, 1.4.3), libpve-cluster-api-perl:amd64 (7.2-2, 7.3-2), pve-ha-manager:amd64 (3.4.0, 3.5.1), libexpat1:amd64 (2.2.10-2+deb11u4, 2.2.10-2+deb11u5), grub-pc-bin:amd64 (2.06-3~deb11u2, 2.06-3~deb11u5), lxcfs:amd64 (4.0.12-pve1, 5.0.3-pve1), swtpm-libs:amd64 (0.7.1~bpo11+1, 0.8.0~bpo11+2), swtpm-tools:amd64 (0.7.1~bpo11+1, 0.8.0~bpo11+2), libuutil3linux:amd64 (2.1.6-pve1, 2.1.9-pve1), libpve-storage-perl:amd64 (7.2-10, 7.3-2), libtiff5:amd64 (4.2.0-1+deb11u1, 4.2.0-1+deb11u3), libpixman-1-0:amd64 (0.40.0-1, 0.40.0-1.1~deb11u1), libzpool5linux:amd64 (2.1.6-pve1, 2.1.9-pve1), libpve-guest-common-perl:amd64 (4.1-4, 4.2-3), libvotequorum8:amd64 (3.1.5-pve2, 3.1.7-pve1), libkrb5support0:amd64 (1.18.3-6+deb11u2, 1.18.3-6+deb11u3), libquorum5:amd64 (3.1.5-pve2, 3.1.7-pve1), swtpm:amd64 (0.7.1~bpo11+1, 0.8.0~bpo11+2), libxml2:amd64 (2.9.10+dfsg-6.7+deb11u2, 2.9.10+dfsg-6.7+deb11u3), pve-cluster:amd64 (7.2-2, 7.3-2), binfmt-support:amd64 (2.2.1-1, 2.2.1-1+deb11u1), proxmox-ve:amd64 (7.2-1, 7.3-1), lxc-pve:amd64 (5.0.0-3, 5.0.2-1), libcmap4:amd64 (3.1.5-pve2, 3.1.7-pve1), proxmox-backup-file-restore:amd64 (2.2.7-1, 2.3.3-1), libcfg7:amd64 (3.1.5-pve2, 3.1.7-pve1), libkrb5-3:amd64 (1.18.3-6+deb11u2, 1.18.3-6+deb11u3), qemu-server:amd64 (7.2-4, 7.3-3), libpve-access-control:amd64 (7.2-4, 7.3-1), pve-container:amd64 (4.2-3, 4.4-2), libproxmox-acme-plugins:amd64 (1.4.2, 1.4.3), libcpg4:amd64 (3.1.5-pve2, 3.1.7-pve1), pve-i18n:amd64 (2.7-2, 2.8-2), bind9-dnsutils:amd64 (1:9.16.33-1~deb11u1, 1:9.16.37-1~deb11u1), proxmox-offline-mirror-helper:amd64 (0.4.0-1, 0.5.1-1), base-files:amd64 (11.1+deb11u5, 11.1+deb11u6), libk5crypto3:amd64 (1.18.3-6+deb11u2, 1.18.3-6+deb11u3), proxmox-archive-keyring:amd64 (2.1, 2.2), libtpms0:amd64 (0.9.2~bpo11+1, 0.9.5~bpo11+1), libssl-dev:amd64 (1.1.1n-0+deb11u3, 1.1.1n-0+deb11u4), proxmox-backup-client:amd64 (2.2.7-1, 2.3.3-1), distro-info-data:amd64 (0.51+deb11u2, 0.51+deb11u3), mariadb-common:amd64 (1:10.5.15-0+deb11u1, 1:10.5.18-0+deb11u1), grub-efi-amd64-bin:amd64 (2.06-3~deb11u2, 2.06-3~deb11u5), grub2-common:amd64 (2.06-3~deb11u2, 2.06-3~deb11u5), libpve-http-server-perl:amd64 (4.1-4, 4.1-5), libssl1.1:amd64 (1.1.1n-0+deb11u3, 1.1.1n-0+deb11u4), pve-manager:amd64 (7.2-11, 7.3-6), libpve-common-perl:amd64 (7.2-3, 7.3-2), nano:amd64 (5.4-2+deb11u1, 5.4-2+deb11u2), grub-common:amd64 (2.06-3~deb11u2, 2.06-3~deb11u5), bind9-libs:amd64 (1:9.16.33-1~deb11u1, 1:9.16.37-1~deb11u1), libmariadb3:amd64 (1:10.5.15-0+deb11u1, 1:10.5.18-0+deb11u1), sudo:amd64 (1.9.5p2-3, 1.9.5p2-3+deb11u1), librados2-perl:amd64 (1.2-1, 1.3-1), pve-kernel-5.15:amd64 (7.2-13, 7.3-2), 
libzfs4linux:amd64 (2.1.6-pve1, 2.1.9-pve1), libksba8:amd64 (1.5.0-3+deb11u1, 1.5.0-3+deb11u2), libexpat1-dev:amd64 (2.2.10-2+deb11u4, 2.2.10-2+deb11u5), curl:amd64 (7.74.0-1.3+deb11u3, 7.74.0-1.3+deb11u5), libvirglrenderer1:amd64 (0.8.2-5, 0.8.2-5+deb11u1), pve-firewall:amd64 (4.2-6, 4.2-7), libcorosync-common4:amd64 (3.1.5-pve2, 3.1.7-pve1), libnozzle1:amd64 (1.24-pve1, 1.24-pve2), git-man:amd64 (1:2.30.2-1, 1:2.30.2-1+deb11u1), libknet1:amd64 (1.24-pve1, 1.24-pve2), grub-pc:amd64 (2.06-3~deb11u2, 2.06-3~deb11u5), dnsutils:amd64 (1:9.16.33-1~deb11u1, 1:9.16.37-1~deb11u1), pve-kernel-helper:amd64 (7.2-13, 7.3-4), zfsutils-linux:amd64 (2.1.6-pve1, 2.1.9-pve1), postfix:amd64 (3.5.13-0+deb11u1, 3.5.17-0+deb11u1), openssl:amd64 (1.1.1n-0+deb11u3, 1.1.1n-0+deb11u4), proxmox-offline-mirror-docs:amd64 (0.4.0-1, 0.5.1-1), libpve-cluster-perl:amd64 (7.2-2, 7.3-2), nftables:amd64 (0.9.8-3.1, 0.9.8-3.1+deb11u1), linux-libc-dev:amd64 (5.10.149-2, 5.10.162-1)
End-Date: 2023-02-11  20:44:13
 
Could it be another package? qemu-server, etc.?
Kernel would be interesting; you can simply (install and) boot an older one.

qemu-server is less likely to make a difference, but if you really want to make sure, you could compare qm showcmd <ID> --pretty with an old and a new version (it should be the same in all the relevant places; otherwise it's a bug, please tell us :)).
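A sketch of that comparison (using your VM ID and the qemu-server versions from your upgrade log above):
Code:
qm showcmd 107 --pretty > /tmp/cmd-new.txt
apt install qemu-server=7.2-4      # old version from the upgrade log
qm showcmd 107 --pretty > /tmp/cmd-old.txt
diff /tmp/cmd-old.txt /tmp/cmd-new.txt
apt install qemu-server=7.3-3      # back to the current version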

Unfortunately, I cannot reproduce the issue. It works for me using pve-qemu-kvm=7.1.0-4 and kernel 5.15.85-1. Thank you for providing the commands; that makes life easy!
 

Did the VM finish booting for you, i.e. did it end up at the login screen / cloud-init finish message?

With my most recent test it still shows the kernel crash, but it continues booting until some timeout happens and drops you to initramfs.


I will do some more tests later tonight.
 
I tried using the 5.13.19 kernel, but it still does not work.

There is an issue here that comes very close: https://github.com/cloud-hypervisor/cloud-hypervisor/issues/456

It says to set "-cpu host-phys-bits" on QEMU to fix it, but if I try that via args in the VM config it does not seem to take effect, since Proxmox already sets "-cpu host...".

I need to know where that happens so I can merge them.

Full boot log attached.
 

Attachments

  • boot_log.txt (63.3 KB)
Did the VM finish booting for you, i.e. did it end up at the login screen / cloud-init finish message?
Yes.

Does it work if you use kvm64 as the CPU model?
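That is, something like:
Code:
qm set 107 --cpu kvm64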

IIRC, if there's a duplicate argument, the one coming via args wins (because it is appended later), but you need to specify the full one, i.e. -cpu host,host-phys-bits. But there should be no need for args. You can also just add it to your cpu argument in the config via qm set <ID> --cpu host,phys-bits=host (it's not exposed in the UI).
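In other words, either of these should do it (sketch):
Code:
# raw args are appended after PVE's own options, so the full -cpu string is needed
qm set 107 --args '-cpu host,host-phys-bits'
# or, preferably, via the cpu option itself
qm set 107 --cpu host,phys-bits=host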

There were actually relevant changes in qemu-server:
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=33b0d3b7bee12c49f3d0b7d53699b5874ab8eb73
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=305e9cec5d74610650aa627176d048b1c7e44877

Please try apt install qemu-server=7.2-8 to see if that makes a difference. But if you are using QEMU 7.1, you might run into the failing check that the commit fixes.
 
The output of qm showcmd <ID> --pretty would also be interesting.
 
I already tried kvm64 as the CPU type, but that makes no difference.

With qemu-server=7.2-8 I get the following error on VM start in Proxmox:

Code:
kvm: Address space limit 0xffffffffff < 0x4487fffffff phys-bits too low (40)
TASK ERROR: start failed: QEMU exited with code 1
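Rough arithmetic on that check, if I read it right:
Code:
# phys-bits 40 gives an addressable limit of 2^40 - 1 = 0xffffffffff (1 TiB),
# but maxmem=4194304M (4 TiB) plus the placement of the hotplug region ends at
# 0x4487fffffff (~4.28 TiB), which needs at least 43 physical address bits.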

showcmd:
Code:
root@testlab:~# qm showcmd 107 --pretty
/usr/bin/kvm \
  -id 107 \
  -name 'debian-cloud,debug-threads=on' \
  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/107.qmp,server=on,wait=off' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/107.pid \
  -daemonize \
  -smbios 'type=1,uuid=f11ec7bc-6319-4b6e-bac0-fea3484ab6d1' \
  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
  -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=131072,file=/dev/zvol/rpool/data/vm-107-disk-0' \
  -smp '1,sockets=1,cores=1,maxcpus=1' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vga none \
  -nographic \
  -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt \
  -m 'size=1024,slots=255,maxmem=4194304M' \
  -object 'memory-backend-ram,id=ram-node0,size=1024M' \
  -numa 'node,nodeid=0,cpus=0,memdev=ram-node0' \
  -object 'memory-backend-ram,id=mem-dimm0,size=512M' \
  -device 'pc-dimm,id=dimm0,memdev=mem-dimm0,node=0' \
  -object 'memory-backend-ram,id=mem-dimm1,size=512M' \
  -device 'pc-dimm,id=dimm1,memdev=mem-dimm1,node=0' \
  -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg \
  -device 'vmgenid,guid=cb3693d0-7296-4995-b48c-1423f351422a' \
  -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' \
  -chardev 'socket,id=serial0,path=/var/run/qemu-server/107.serial0,server=on,wait=off' \
  -device 'isa-serial,chardev=serial0' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:672667203266' \
  -drive 'file=/dev/zvol/rpool/data/vm-107-cloudinit,if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2' \
  -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
  -drive 'file=/dev/zvol/rpool/data/vm-107-disk-1,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
  -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap107i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=92:D5:60:E6:8F:59,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024' \
  -machine 'type=q35+pve0'
 
Thank you @fiona

Downgrading both makes it work again! :)

Code:
root@testlab:~# apt list -u
pve-qemu-kvm/stable 7.1.0-4 amd64 [upgradable from: 7.0.0-4]
qemu-server/stable 7.3-3 amd64 [upgradable from: 7.2-8]

VM boots fine and hotplug works as well.
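For reference, the downgrade itself, plus a hold so the next full-upgrade does not pull the packages back in:
Code:
apt install pve-qemu-kvm=7.0.0-4 qemu-server=7.2-8
apt-mark hold pve-qemu-kvm qemu-server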
 
showcmd:
Code:
root@testlab:~# qm showcmd 107 --pretty
  -m 'size=1024,slots=255,maxmem=4194304M' \
I assume this is the output with qemu-server=7.2-8 installed? What is the maxmem value with the current one?

Can you share the output of cat /proc/cpuinfo?

With current packages installed, does using qm set 107 --cpu host,phys-bits=host work?
 
I assume this is the output with qemu-server=7.2-8 installed? What is the maxmem value with the current one?
With the latest version it's -m 'size=1024,slots=255,maxmem=524288M'.
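That line can be pulled straight out of the pretty output:
Code:
qm showcmd 107 --pretty | grep maxmem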

With current packages installed, does using qm set 107 --cpu host,phys-bits=host work?
It's set, but the VM does not boot with it. The pretty output shows: -cpu 'host,+kvm_pv_eoi,+kvm_pv_unhalt,host-phys-bits=true'


cpuinfo:
(nested KVM with the CPU set to Skylake-Client-IBRS)

Code:
root@testlab:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 94
model name      : Intel Core Processor (Skylake, IBRS)
stepping        : 3
microcode       : 0x1
cpu MHz         : 3599.998
cache size      : 16384 KB
physical id     : 0
siblings        : 14
core id         : 0
cpu cores       : 7
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves arat umip
vmx flags       : vnmi preemption_timer invvpid ept_x_only ept_ad ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest shadow_vmcs pml
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit srbds mmio_stale_data retbleed
bogomips        : 7199.99
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

<repeats for 8 cores>
 
Sorry about the delay!

It's set, but the VM does not boot with it. The pretty output shows: -cpu 'host,+kvm_pv_eoi,+kvm_pv_unhalt,host-phys-bits=true'
There is host-phys-bits=true so the option is applied ;)

cpuinfo:
(nested kvm with cpu set to Skylake-Client-IBRS)
Since this is nested, how are you running the outer, i.e. L1, guest?

Code:
address sizes   : 40 bits physical, 48 bits virtual
Is the 40 bits physical the actual limitation of the CPU, or is that because of how it is run on L1? If you can change it to have at least 43 physical bits, then qemu-server will try to use 4 TiB again and QEMU 7.1 won't fail the check.
 
Since this is nested, how are you running the outer, i.e. L1, guest?

L0 runs Debian 11 with a custom 5.15.95 kernel and virt-manager (QEMU/KVM).

This is L1 Proxmox:
Code:
/usr/bin/qemu-system-x86_64 \
  -name guest=proxmox,debug-threads=on \
  -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-proxmox/master-key.aes \
  -machine pc-q35-3.1,accel=kvm,usb=off,vmport=off,dump-guest-core=off \
  -cpu Skylake-Client-IBRS,ss=on,vmx=on,pdcm=on,hypervisor=on,tsc_adjust=on,clflushopt=on,umip=on,ssbd=on,xsaves=on,pdpe1gb=on,ibpb=on,amd-ssbd=on \
  -drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on \
  -drive file=/var/lib/libvirt/qemu/nvram/proxmox_VARS.fd,if=pflash,format=raw,unit=1 \
  -m 40960 \
  -realtime mlock=off \
  -smp 14,maxcpus=16,sockets=1,cores=8,threads=2 \
  -uuid 6566f999-b6bd-49a6-8295-3ec4f2a89a0e \
  -no-user-config \
  -nodefaults \
  -chardev socket,id=charmonitor,fd=27,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc,driftfix=slew \
  -global kvm-pit.lost_tick_policy=delay \
  -no-hpet \
  -no-shutdown \
  -global ICH9-LPC.disable_s3=1 \
  -global ICH9-LPC.disable_s4=1 \
  -boot strict=on \
  -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
  -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
  -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
  -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
  -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
  -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
  -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
  -device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.2,addr=0x0 \
  -device virtio-scsi-pci,id=scsi0,bus=pci.4,addr=0x0 \
  -device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 \
  -drive file=/media/images/proxmox-ve_7.0-1.iso,format=raw,if=none,id=drive-sata0-0-0,media=cdrom,readonly=on \
  -device ide-cd,bus=ide.0,drive=drive-sata0-0-0,id=sata0-0-0,bootindex=2 \
  -drive file=/var/lib/libvirt/images/proxmox.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-1 \
  -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1 \
  -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:3d:51:fe,bus=pci.1,addr=0x0 \
  -chardev pty,id=charserial0 \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -chardev socket,id=charchannel0,fd=32,server,nowait \
  -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
  -chardev spicevmc,id=charchannel1,name=vdagent \
  -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 \
  -device usb-tablet,id=input0,bus=usb.0,port=1 \
  -spice port=5901,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on \
  -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
  -device ich9-intel-hda,id=sound0,bus=pcie.0,addr=0x1b \
  -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \
  -chardev spicevmc,id=charredir0,name=usbredir \
  -device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 \
  -chardev spicevmc,id=charredir1,name=usbredir \
  -device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.5,addr=0x0 \
  -object rng-random,id=objrng0,filename=/dev/urandom \
  -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.6,addr=0x0 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on

Is the 40 bits physical the actual limitation of the CPU or is that because how it is run on L1? If you can change it to have at least 43 physical bits, then qemu-server will try to use 4TiB again and QEMU 7.1 won't fail the check.

The real CPU has
address sizes : 39 bits physical, 48 bits virtual
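(checked on the physical host with:)
Code:
grep -m1 'address sizes' /proc/cpuinfo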
 
The real CPU has
address sizes : 39 bits physical, 48 bits virtual
Can you try setting phys-bits for the L1 guest to 39 (or lower) then? I mean, if the L1 guest reports 40 but the CPU actually only has 39, that smells fishy to me.
 
Can you try setting phys-bits for the L1 guest to 39 (or lower) then? I mean, if the L1 guest reports 40 but the CPU actually only has 39, that smells fishy to me.

The CPU was actually set to "Copy host CPU configuration"; in the virsh XML it's "<cpu mode='host-model' check='partial'>".

I was not able to change the physical bits just now; I will try again later.
 
I can't change the physical bits value. Even with host-passthrough, which now shows the real CPU in the VM, it's always 40.

Setting it requires libvirt 8.7.0 (https://libvirt.org/formatdomain.html#maxphysaddr), which is only available starting with Debian Bookworm, which is still in development.
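For anyone on a newer libvirt, per the linked docs the XML should look roughly like this (untested on my side):
Code:
<cpu mode='host-passthrough'>
  <!-- pass the host's physical address bits through to the guest -->
  <maxphysaddr mode='passthrough'/>
</cpu>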

I will just stick to QEMU 7.0.
 
