Proxmox VE 7.4 released!

Awesome Work! The dark mode is amazing! Thank you for all the hard work you're all doing!!
 
Cool
The upgrade went through automatically with a "normal" update (PVE 6.x), and the dark mode is very nice.
Thx
 
I run Proxmox in a single-GPU passthrough configuration with a Windows 11 workstation. The upgrade to 7.4 went smoothly and everything works... but I swear my experience is just slower across the board, enough so that I felt I should post something here. Nothing interesting in /var/log/messages points to any problems.

Hardware:
  • Intel Core i9-10850K 3.6 GHz 10-Core Processor
  • Asus ROG STRIX Z490-E GAMING ATX LGA1200 Motherboard
  • G.Skill Ripjaws V 64 GB (4 x 16 GB) DDR4-3600 CL16 Memory

Code:
IOMMU Group 0 00:00.0 Host bridge [0600]: Intel Corporation Device [8086:9b33] (rev 05)
IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 05)
IOMMU Group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:2489] (rev a1)
IOMMU Group 1 01:00.1 Audio device [0403]: NVIDIA Corporation GA104 High Definition Audio Controller [10de:228b] (rev a1)
IOMMU Group 2 00:02.0 Display controller [0380]: Intel Corporation CometLake-S GT2 [UHD Graphics 630] [8086:9bc5] (rev 05)
IOMMU Group 3 00:14.0 USB controller [0c03]: Intel Corporation Comet Lake USB 3.1 xHCI Host Controller [8086:06ed]
IOMMU Group 3 00:14.2 RAM memory [0500]: Intel Corporation Comet Lake PCH Shared SRAM [8086:06ef]
IOMMU Group 4 00:15.0 Serial bus controller [0c80]: Intel Corporation Comet Lake PCH Serial IO I2C Controller #0 [8086:06e8]
IOMMU Group 4 00:15.1 Serial bus controller [0c80]: Intel Corporation Comet Lake PCH Serial IO I2C Controller #1 [8086:06e9]
IOMMU Group 5 00:16.0 Communication controller [0780]: Intel Corporation Comet Lake HECI Controller [8086:06e0]
IOMMU Group 6 00:17.0 SATA controller [0106]: Intel Corporation Device [8086:06d2]
IOMMU Group 7 00:1b.0 PCI bridge [0604]: Intel Corporation Comet Lake PCI Express Root Port #17 [8086:06c0] (rev f0)
IOMMU Group 8 00:1b.4 PCI bridge [0604]: Intel Corporation Comet Lake PCI Express Root Port #21 [8086:06ac] (rev f0)
IOMMU Group 9 00:1c.0 PCI bridge [0604]: Intel Corporation Device [8086:06b8] (rev f0)
IOMMU Group 10 00:1c.4 PCI bridge [0604]: Intel Corporation Device [8086:06bc] (rev f0)
IOMMU Group 11 00:1c.5 PCI bridge [0604]: Intel Corporation Device [8086:06bd] (rev f0)
IOMMU Group 12 00:1c.6 PCI bridge [0604]: Intel Corporation Device [8086:06be] (rev f0)
IOMMU Group 13 00:1c.7 PCI bridge [0604]: Intel Corporation Device [8086:06bf] (rev f0)
IOMMU Group 14 00:1d.0 PCI bridge [0604]: Intel Corporation Comet Lake PCI Express Root Port #9 [8086:06b0] (rev f0)
IOMMU Group 15 00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:0685]
IOMMU Group 15 00:1f.4 SMBus [0c05]: Intel Corporation Comet Lake PCH SMBus Controller [8086:06a3]
IOMMU Group 15 00:1f.5 Serial bus controller [0c80]: Intel Corporation Comet Lake PCH SPI Controller [8086:06a4]
IOMMU Group 16 03:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
IOMMU Group 17 05:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 02)
IOMMU Group 18 06:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 02)
IOMMU Group 19 07:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014] (rev 03)
IOMMU Group 20 08:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014] (rev 03)
IOMMU Group 21 09:00.0 Non-Volatile memory controller [0108]: Phison Electronics Corporation E12 NVMe Controller [1987:5012] (rev 01)

root@pve:~# cat /proc/cmdline
initrd=\EFI\proxmox\5.15.102-1-pve\initrd.img-5.15.102-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt initcall_blacklist=sysfb_init

root@pve:~# cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:2489,10de:228b,1987:5012,1912:0014,1b21:0612 disable_vga=1

root@pve:~# cat /etc/pve/qemu-server/100.conf
agent: 1
balloon: 0
bios: ovmf
boot: order=hostpci0
cores: 8
cpu: host
efidisk0: local-zfs:vm-100-disk-1,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:09:00,pcie=1
hostpci1: 0000:01:00,pcie=1,x-vga=1,romfile=08G-P5-3663-KL.rom
hostpci2: 0000:08:00,pcie=1
hostpci3: 0000:07:00,pcie=1
hostpci4: 0000:06:00,pcie=1
machine: pc-q35-7.2
memory: 16384
meta: creation-qemu=6.1.1,ctime=1648601835
name: raven
net0: virtio=5E:42:11:7E:16:FA,bridge=vmbr0
numa: 0
onboot: 1
ostype: win11
scsihw: virtio-scsi-pci
smbios1: uuid=5d6ae88a-76b6-4ab0-8aac-9ae36dcb518c
sockets: 1
tablet: 0
tpmstate0: local-zfs:vm-100-disk-0,size=4M,version=v2.0
vga: none
vmgenid: f3f742fa-3520-452f-98d6-1c6c5fbb2fb5
 
Since the upgrade, the Win10 VM's display (the VNC console) is black. During Windows startup the Windows logo is shown, but after startup the screen stays black.
CPU load: 0.3%
Any ideas?

Solved:
I restored the backup from 23.3.
 
I run Proxmox in a single-GPU passthrough configuration with a Windows 11 workstation. The upgrade to 7.4 went smoothly and everything works... but I swear my experience is just slower across the board, enough so that I felt I should post something here. Nothing interesting in /var/log/messages points to any problems.

Hardware:
  • Intel Core i9-10850K 3.6 GHz 10-Core Processor
  • Asus ROG STRIX Z490-E GAMING ATX LGA1200 Motherboard
  • G.Skill Ripjaws V 64 GB (4 x 16 GB) DDR4-3600 CL16 Memory
The 6.2 kernel feels faster to me... but I'm not running anything fancy like you are.
 
Unable to add 2nd IDE CD-ROM Device:

kvm: -device ide-cd,bus=ide.0,unit=1,drive=drive-ide1,id=ide1,bootindex=101: Can't create IDE unit 1, bus supports only 1 units

It looks like the VM config got "confused". I removed both CD-ROM devices, added them again, and changed the machine type to q35 version 7.1; it works OK now.
Would you like to file a bug report in the Proxmox Bugzilla for that, @isi, so that it doesn't get lost?
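For context, the error above says the IDE bus on this machine type only takes a single unit, so two CD-ROM drives need slot numbers that land on different buses. A hypothetical config fragment, under the assumption that non-adjacent slots (ide0, ide2) map to different buses and with placeholder ISO names:

```
# Sketch: two CD-ROM drives on separate IDE slots (ISO names are placeholders)
ide0: local:iso/example-a.iso,media=cdrom
ide2: local:iso/example-b.iso,media=cdrom
machine: pc-q35-7.1
```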
 
After the upgrade, a Win10 VM stopped running.
After restoring a two-day-old backup, it ran again.
But then a Debian Bullseye CT did not work anymore.
After an update & reboot it worked again.
Now a CT with Debian Lenny does not work anymore.
That's the situation so far!
 
Not sure if this is just me, but I'm having issues accessing the web GUI's mobile view after updating. The UI is totally broken. If I request the "desktop version" it works fine (including the dark theme). I tried clearing the cache on my iPhone, but it didn't help. This is a PVE server upgraded from the previous version.

(Screenshots attached: the broken mobile view, and the working desktop view.)
 
Since the upgrade, the Win10 VM's display (the VNC console) is black. During Windows startup the Windows logo is shown, but after startup the screen stays black.
CPU load: 0.3%
Any ideas?

Solved:
I restored the backup from 23.3.
I can confirm this.

Can you please post more details: what hardware (especially CPU) and kernel are in use, the full VM config (qm config VMID), and the full host version (pveversion -v)?

I re-checked the Windows 10 installations that I could get my hands on relatively quickly (on Intel Alder Lake with 6.2, on a Xeon v3 with both 6.2 and 5.15, and on 1st-gen EPYC with 5.15), and there it worked OK.
The VMs came up fine and graphics are fully functional.
 
Not sure if this is just me, but I'm having issues accessing the web GUI's mobile view after updating. The UI is totally broken. If I request the "desktop version" it works fine (including the dark theme). I tried clearing the cache on my iPhone, but it didn't help. This is a PVE server upgraded from the previous version.
This is tracked here: https://bugzilla.proxmox.com/show_bug.cgi?id=4612
 
Can you please post more details: what hardware (especially CPU) and kernel are in use, the full VM config (qm config VMID), and the full host version (pveversion -v)?

I re-checked the Windows 10 installations that I could get my hands on relatively quickly (on Intel Alder Lake with 6.2, on a Xeon v3 with both 6.2 and 5.15, and on 1st-gen EPYC with 5.15), and there it worked OK.
The VMs came up fine and graphics are fully functional.
I can reproduce this on a Windows 10 VM and on a Windows 11 VM.

Intel NUC7i5BNH - 4 x Intel(R) Core(TM) i5-7260U CPU @ 2.20GHz (1 Socket)

Windows10:
Code:
agent: 1,fstrim_cloned_disks=1
boot: order=sata1;ide2
cores: 4
description: IP dynamisch
ide2: DataCenter:iso/virtio-win.iso,media=cdrom,size=519030K
memory: 8192
name: vWin10Edu
net0: virtio=xx,bridge=vmbr0,firewall=1
sata1: local-zfs:vm-303-disk-0,size=40G,ssd=1
vmgenid: ...

Windows11:
Code:
agent: 1,fstrim_cloned_disks=1
bios: ovmf
boot: order=ide2;sata1
cores: 4
cpu: host
description: IP dynamisch%0AUpgrade von vWin10Edu
efidisk0: local-zfs:vm-305-disk-2,efitype=4m,pre-enrolled-keys=1,size=1M
ide2: DataCenter:iso/virtio-win.iso,media=cdrom,size=519030K
machine: pc-q35-7.0
memory: 8192
name: vWin11Edu
net0: virtio=xx,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
sata1: local-zfs:vm-305-disk-0,size=128G,ssd=1
smbios1: uuid=...
sockets: 1
tpmstate0: local-zfs:vm-305-disk-1,size=4M,version=v2.0
vmgenid: ...

pveversion -v
Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.3
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20221111-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1
 
Hi!
A VM (Home Assistant) doesn't boot after the upgrade to 7.4 (UEFI boot loop).
When KVM hardware virtualization is set to Off, the VM starts booting, but the process doesn't finish and the login prompt never appears; the VM just reboots again and again.
 
Hi,
Hi!
A VM (Home Assistant) doesn't boot after the upgrade to 7.4 (UEFI boot loop).
When KVM hardware virtualization is set to Off, the VM starts booting, but the process doesn't finish and the login prompt never appears; the VM just reboots again and again.
please share the output of pveversion -v and the VM configuration qm config <ID> with the ID of the VM. Does it work with either apt install pve-edk2-firmware=3.20230228-1 (currently available on the no-subscription repository) or apt install pve-edk2-firmware=3.20220526-1 installed?
 
Hi,

please share the output of pveversion -v and the VM configuration qm config <ID> with the ID of the VM. Does it work with either apt install pve-edk2-firmware=3.20230228-1 (currently available on the no-subscription repository) or apt install pve-edk2-firmware=3.20220526-1 installed?
root@proxmox:~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-11
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 15.2.15-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.4
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-1
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

root@proxmox:~# qm config 100
agent: 1
bios: ovmf
boot: order=scsi0
cores: 2
efidisk0: local-lvm:vm-100-disk-0,size=4M
ide2: none,media=cdrom
kvm: 1
memory: 8008
name: HomeAssistant
net0: virtio=96:42:EC:72:AD:D8,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-1,size=120G
scsihw: virtio-scsi-pci
smbios1: uuid=116e4510-a251-4c1c-958b-5cfe7871d089
sockets: 1
usb0: host=1cf1:0030
vmgenid: 6a57cd49-fe5b-41a6-8f5c-dcf57c2fb153

Yes, it does work with pve-edk2-firmware=3.20220526-1! Thanks!
 
usb0: host=1cf1:0030
What kind of device are you passing through? Does it work with the new pve-edk2-firmware package if you remove the passthrough? How did you set up/install the HomeAssistant VM? I haven't been able to reproduce the issue here yet unfortunately.
 
What kind of device are you passing through? Does it work with the new pve-edk2-firmware package if you remove the passthrough? How did you set up/install the HomeAssistant VM? I haven't been able to reproduce the issue here yet unfortunately.
The USB device is a ConBee II; it does work with pve-edk2-firmware=3.20220526-1.
HA was installed using this script:
Code:
bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/vm/haos-vm.sh)"

The issue should be reproducible:
1) Install Proxmox 7.2 on a new SSD.
2) Install HA from the script above; the VM boots and works OK.
3) Upgrade to 7.4.
4) The VM doesn't boot anymore: UEFI loop.
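Until the regression is resolved, the downgraded firmware package can be kept from being upgraded again with an apt pin. A sketch (the file name is arbitrary; the version is the working one from above):

```
# /etc/apt/preferences.d/pin-edk2 (hypothetical file name)
# Keep the known-good EDK2 firmware until the UEFI boot loop is fixed
Package: pve-edk2-firmware
Pin: version 3.20220526-1
Pin-Priority: 1001
```

Alternatively, `apt-mark hold pve-edk2-firmware` achieves much the same; remember to remove the pin/hold once a fixed package lands.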
 