QEMU 8.1 available on pvetest and pve-no-subscription as of now

t.lamprecht

Proxmox Staff Member
FYI: The next Proxmox VE point release 8.1 (2023/Q4) will default to QEMU 8.1.
Internal testing for that version started in July with the first release candidates; since then, our engineers have contributed several complex fixes to the stable release. Today, our QEMU 8.1 package has been made available on the pvetest repository. This exact build of the QEMU package has been in QA for over two weeks, powering some of our internal production loads.

Note: While our internal workloads run stably, we do not recommend upgrading production setups just yet; that's why we initially made it available on the pvetest repository.
But if you have test instances, face issues with an older QEMU or QEMU/kernel combination that 8.1 might fix, or are just interested in evaluating it early, you now have the opportunity to upgrade early and provide feedback here.

To upgrade, ensure you have the Proxmox VE test repositories configured (the package is now also available through the no-subscription repository).
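For reference, a minimal sketch of such a repository entry, assuming Proxmox VE 8 on Debian 12 "Bookworm"; the file name is just an example, any sources file apt reads will do:
Code:
# /etc/apt/sources.list.d/pvetest.list -- Proxmox VE test repository (not meant for production)
deb http://download.proxmox.com/debian/pve bookworm pvetest
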
Either use the web-interface, or a console with the following standard commands:
Bash:
apt update
apt full-upgrade

The pveversion -v output (or the web-interface's Node Summary -> Packages versions) should then include an entry like pve-qemu-kvm: 8.1.2-1.
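If you only want to check that one package on the console, a simple grep over the same output works:
Bash:
pveversion -v | grep pve-qemu-kvm
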

Note, as with all QEMU updates: a VM needs to be fully restarted (shutdown/start, or using restart via the CLI or web-interface) to actually run on the new version; alternatively, to avoid downtime, consider live-migrating it to a host that has already been upgraded to the new QEMU.
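As a rough sketch of those two options on the CLI (VMID 100 and target node pve2 are just placeholders, adjust to your setup):
Bash:
# option A: full restart, so the VM process is started with the new QEMU binary
qm shutdown 100
qm start 100

# option B: avoid downtime by live-migrating to an already upgraded host
qm migrate 100 pve2 --online
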

While we have successfully run our production and many testing loads on this version for some time now, no software is bug-free, and such issues are often related to the specific setup. So, if you run into regressions that are definitely caused by installing the new QEMU version (and not some other change), please always include the affected VM config and some basic HW (e.g., CPU model) and storage details.
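For example, the following commands gather most of that information (VMID 100 is again just a placeholder):
Bash:
qm config 100                # affected VM config
lscpu | grep 'Model name'    # host CPU model
pveversion -v                # package versions, including pve-qemu-kvm
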

We'll update this thread once this version has been moved to no-subscription, which could happen as early as next week depending on the feedback.
 
Upgraded Proxmox Ceph HCI (v3) without any issues so far, also with Ceph 18.2.
 
Thanks for your feedback!
With no issues reported and internal production loads still running smoothly, we have now moved the package to the pve-no-subscription repository.
 
Looks as if I borked the web GUI... I ran
Bash:
systemctl status pveproxy
and it's running, yet I have no browser access and I cannot SSH in. I launched both VMs, one pfSense and the other openmediavault, but can only access pfSense's web GUI.

Before losing browser access to the web GUI, I had opened openmediavault's web GUI, but the browser filled in a saved user and password for that IP and it was incorrect... then the browser locked up. I cleared the cache, but that didn't resolve it... I also rebooted the computer I had been using to access the web GUI, but that didn't resolve the issue either.
 
Please open a new thread; this seems highly unlikely to be fallout from using QEMU 8.1.

For the new thread, check the syslog (journalctl -b) for any errors, see whether pveproxy works locally (e.g., curl -k https://localhost:8006), and post any information you find from checking those.
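Put together, those checks could look roughly like this (the error grep is just a rough filter; port 8006 is the default for the Proxmox VE web interface):
Bash:
journalctl -b | grep -i error       # any errors since boot
systemctl status pveproxy           # is the API proxy running?
curl -k https://localhost:8006      # does it answer locally?
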
 
It looks like it was a false alarm, and the real problem is that the Mac Pro I am using to access Proxmox is having a network issue for an unknown reason... will look into it later. The Proxmox computer is, and has been, running without issues.
 
Even if the next one is a bit off-topic:
Really, really great support! No blabla or excuses, just open and helpful. Thanks, guys (and girls)!
 
Just tested the new KVM package, and a vSRX, which is based on FreeBSD 12.1, will not boot. Here is a screenshot:

[Screenshot: vSRX boot output failing with QEMU 8.1]

And to verify, the same boot with 8.0.2-7:

[Screenshot: the same VM booting successfully with QEMU 8.0.2-7]

To me it looks like the "real memory" is the problem, but I really don't know...

This is the configuration:

Code:
balloon: 0
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide2: none,media=cdrom
machine: q35
memory: 4096
meta: creation-qemu=8.0.2,ctime=1692717742
name: vsrx
net0: virtio=A6:05:3D:xx:xx:xx,bridge=vmbr0,link_down=1
net1: virtio=EE:DF:D2:xx:xx:xx,bridge=vmbr1,tag=304
net2: virtio=2A:07:A2:xx:xx:xx,bridge=vmbr0
net3: virtio=AA:0B:D4:xx:xx:xx,bridge=vmbr2
numa: 0
onboot: 0
ostype: l26
scsi0: local-zfs:vm-100-disk-0,cache=writeback,discard=on,iothread=1,size=18438M,ssd=1
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=xxxx
sockets: 1
vmgenid: xxxx

and pveversion -v:

Code:
proxmox-ve: 8.0.2 (running kernel: 6.5.11-3-pve)
pve-manager: 8.0.9 (running version: 8.0.9/fd1a0ae1b385cdcd)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.5
proxmox-kernel-6.5: 6.5.11-3
proxmox-kernel-6.5.11-3-pve: 6.5.11-3
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2: 6.2.16-19
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
proxmox-kernel-6.2.16-12-pve: 6.2.16-12
proxmox-kernel-6.2.16-8-pve: 6.2.16-8
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx6
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.6
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.10
libpve-guest-common-perl: 5.0.5
libpve-http-server-perl: 5.0.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.4
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.4-1
proxmox-backup-file-restore: 3.0.4-1
proxmox-kernel-helper: 8.0.5
proxmox-mail-forward: 0.2.1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.1
pve-cluster: 8.0.5
pve-container: 5.0.5
pve-docs: 8.0.5
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.0.7
pve-qemu-kvm: 8.0.2-7
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.8
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve3

If you need anything just let me know.
Thanks!
 
Hi,
Just tested the new KVM package, and a vSRX, which is based on FreeBSD 12.1, will not boot. Here is a screenshot:

[Screenshot: vSRX boot output failing with QEMU 8.1]

And to verify, the same boot with 8.0.2-7:

[Screenshot: the same VM booting successfully with QEMU 8.0.2-7]
So likely a QEMU regression.
To me it looks like the "real memory" is the problem, but I really don't know...
Code:
balloon: 0
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide2: none,media=cdrom
machine: q35
memory: 4096
meta: creation-qemu=8.0.2,ctime=1692717742
name: vsrx
net0: virtio=A6:05:3D:xx:xx:xx,bridge=vmbr0,link_down=1
net1: virtio=EE:DF:D2:xx:xx:xx,bridge=vmbr1,tag=304
net2: virtio=2A:07:A2:xx:xx:xx,bridge=vmbr0
net3: virtio=AA:0B:D4:xx:xx:xx,bridge=vmbr2
numa: 0
onboot: 0
ostype: l26
Might not be the cause of the issue, but the OS type is set to Linux. What you can also try as a workaround is setting the CPU type to x86-64-v2-AES or the machine type to i440fx. As always, make sure you have a working backup first.
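For reference, a rough sketch of trying those workarounds from the CLI (VMID 100 is a placeholder; "pc" is the i440fx machine type):
Bash:
# try a fixed CPU model instead of 'host'
qm set 100 --cpu x86-64-v2-AES

# or switch from q35 to the i440fx machine type
qm set 100 --machine pc
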
What kind of host CPU do you have? I cannot reproduce the issue with a FreeBSD 12.1 ISO (installed with QEMU 8.0, then booted with QEMU 8.1). Is there a public (test) ISO available for vSRX somewhere?
 
Code:
ide2: none,media=cdrom
What if you delete the empty CD drive? With pve-qemu-kvm=8.1.2-2 and the empty drive, FreeBSD 12.1 does get stuck for me during boot. With pve-qemu-kvm=8.1.2-3 (a build available on the test repository), it works.
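If you want to test that quickly from the CLI, something like this should do (VMID 100 as a placeholder; it only removes the config entry, since the drive has no media anyway):
Bash:
qm set 100 --delete ide2
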
 
I, too, have the issue with vSRX. I even tried the latest 8.1.2-4, and it still won't boot.

CPU(s): 8 x 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (1 Socket) and 16 x 13th Gen Intel(R) Core(TM) i7-1360P (1 Socket)

Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-4-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.0.9
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
proxmox-kernel-6.5: 6.5.11-4
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2: 6.2.16-19
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.4
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.0.4-1
proxmox-backup-file-restore: 3.0.4-1
proxmox-kernel-helper: 8.0.9
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.2
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve3

Code:
balloon: 0
boot: order=ide2;scsi0;net0
cores: 2
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 4096
meta: creation-qemu=8.1.2,ctime=1700426227
name: TEST-SRX-WEST
net0: virtio=BC:24:11:9A:FE:06,bridge=vmbr0,firewall=1,tag=48
net1: virtio=BC:24:11:CF:6B:DE,bridge=vmbr0,tag=103
net2: virtio=BC:24:11:CE:4C:DE,bridge=vmbr0,tag=101
numa: 0
ostype: l26
scsi0: NFS-RS-VMs:143/vm-143-disk-0.qcow2,iothread=1,size=18438M
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=c5bbb199-a926-4364-83e5-2d3b0d29dc3f
sockets: 1
vmgenid: 786592d2-8a5a-45b5-9fee-5b24738e0c21

Code:
Booting [/packages/sets/active/boot/os-kernel/kernel]...            
GDB: debug ports: uart
GDB: current port: uart
KDB: debugger backends: ddb gdb ndb
KDB: current backend: ddb
Copyright (c) 1992-2017 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD JNPR-11.0-20200908.87c9d89_builder_stable_11 #0 r356482+87c9d899138(HEAD): Tue Sep  8 11:40:21 PDT 2020
    builder@feyrith.juniper.net:/volume/build/junos/occam/llvm-5.0/sandbox-20200903/freebsd/stable_11/20200903.184927_builder_stable_11.87c9d89/obj/amd64/juniper/kernels/JNPR-AMD64-PRD/kernel amd64
Juniper clang version 5.0.2  (based on LLVM 5.0.2)
VT(vga): text 80x25
CPU: QEMU Virtual CPU version 2.5+ (2419.27-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x60fb1  Family=0xf  Model=0x6b  Stepping=1
  Features=0x1783fbfd<FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2,HTT>
  Features2=0x82b82201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,AESNI,HV>
  AMD Features=0x20100800<SYSCALL,NX,LM>
  AMD Features2=0x1<LAHF>
Hypervisor: Origin = "KVMKVMKVM"
real memory  = 5368709120 (5120 MB)
avail memory = 4074000384 (3885 MB)
mtx_platform_early_bootinit: M/T/EX/SRX Series Early Boot Initialization
kernel trap 12 with interrupts disabled


Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x0
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff804f71a0
stack pointer           = 0x28:0xffffffff83276ba0
frame pointer           = 0x28:0xffffffff83276ba0
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = resume, IOPL = 0
current process         = 0 ()
trap number             = 12
panic: page fault
cpuid = 0
Uptime: 1s

Going back to pve-qemu-kvm 8.0.2-7 does resolve the issue.

Code:
root> show version
Model: vSRX
Junos: 20.3R1.8
JUNOS OS Kernel 64-bit XEN [20200908.87c9d89_builder_stable_11]
JUNOS OS libs [20200908.87c9d89_builder_stable_11]
JUNOS OS runtime [20200908.87c9d89_builder_stable_11]
JUNOS OS time zone information [20200908.87c9d89_builder_stable_11]
JUNOS OS libs compat32 [20200908.87c9d89_builder_stable_11]
JUNOS OS 32-bit compatibility [20200908.87c9d89_builder_stable_11]
JUNOS py extensions2 [20200921.081424_builder_junos_203_r1]
JUNOS py extensions [20200921.081424_builder_junos_203_r1]
JUNOS py base2 [20200921.081424_builder_junos_203_r1]
JUNOS py base [20200921.081424_builder_junos_203_r1]
JUNOS OS vmguest [20200908.87c9d89_builder_stable_11]
JUNOS OS support utilities [20200908.87c9d89_builder_stable_11]
JUNOS OS crypto [20200908.87c9d89_builder_stable_11]
JUNOS OS boot-ve files [20200908.87c9d89_builder_stable_11]
JUNOS network stack and utilities [20200921.081424_builder_junos_203_r1]
JUNOS libs [20200921.081424_builder_junos_203_r1]
JUNOS libs compat32 [20200921.081424_builder_junos_203_r1]
JUNOS runtime [20200921.081424_builder_junos_203_r1]
JUNOS na telemetry [20.3R1.8]
JUNOS Web Management Platform Package [20200921.081424_builder_junos_203_r1]
JUNOS vsrx modules [20200921.081424_builder_junos_203_r1]
JUNOS srx libs compat32 [20200921.081424_builder_junos_203_r1]
JUNOS srx runtime [20200921.081424_builder_junos_203_r1]
JUNOS srx platform support [20200921.081424_builder_junos_203_r1]
JUNOS common platform support [20200921.081424_builder_junos_203_r1]
JUNOS vsrx runtime [20200921.081424_builder_junos_203_r1]
JUNOS probe utility [20200921.081424_builder_junos_203_r1]
JUNOS pppoe [20200921.081424_builder_junos_203_r1]
JUNOS Openconfig [20.3R1.8]
JUNOS mtx network modules [20200921.081424_builder_junos_203_r1]
JUNOS modules [20200921.081424_builder_junos_203_r1]
JUNOS srx libs [20200921.081424_builder_junos_203_r1]
JUNOS hsm [20200921.081424_builder_junos_203_r1]
JUNOS srx Data Plane Crypto Support [20200921.081424_builder_junos_203_r1]
JUNOS daemons [20200921.081424_builder_junos_203_r1]
JUNOS srx daemons [20200921.081424_builder_junos_203_r1]
JUNOS cloud init [20200921.081424_builder_junos_203_r1]
JUNOS SRX TVP AppQos Daemon [20200921.081424_builder_junos_203_r1]
JUNOS Extension Toolkit [20200921.081424_builder_junos_203_r1]
JUNOS Juniper Malware Removal Tool (JMRT) [1.0.0+20200921.081424_builder_junos_203_r1]
JUNOS J-Insight [20200921.081424_builder_junos_203_r1]
JUNOS Online Documentation [20200921.081424_builder_junos_203_r1]
JUNOS jail runtime [20200908.87c9d89_builder_stable_11]
JUNOS FIPS mode utilities [20200921.081424_builder_junos_203_r1]

I did try to remove the CD drive on the latest update, and that did not help.

You can register for a trial of vSRX at: https://www.juniper.net/us/en/dm/download-next-gen-vsrx-firewall-trial.html

It will allow you to download the qcow2 image. There is no ISO.
 
Hi,

found the problem:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg971771.html

QEMU changed the default SMBIOS entry point to version 3.0 (64-bit), which breaks all Juniper products, as they set the system information in SMBIOS at boot.

A workaround is to set "args: -machine q35,smbios-entry-point-type=32" in the VM config file; then the VM boots again without problems (as SMBIOS then runs in 32-bit mode).
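For anyone who prefers not to edit the config file by hand, a sketch of setting this via qm (VMID 100 as a placeholder; note that args is a root-only option):
Bash:
qm set 100 --args '-machine q35,smbios-entry-point-type=32'
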

Could you maybe add a switch in the web interface to change the SMBIOS entry point back to 32-bit?

Thanks,
Christian
 
Thank you for this! I just tested on pve-qemu-kvm 8.1.2-4 with "args: -machine pc,smbios-entry-point-type=32" since I was using i440fx instead of q35, and I can confirm that the vSRX is now booting.
 
