Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test

t.lamprecht
Proxmox Staff Member
We recently uploaded the 7.0 (rc6) kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.17, but 7.0 is now an option.

We plan to make the 7.0 kernel the new default for the upcoming Proxmox VE 9.2 and Proxmox Backup Server 4.2 releases, planned for later in Q2.
This follows our tradition of upgrading the Proxmox VE kernel to match the current Ubuntu version until we reach an Ubuntu LTS release, at which point we will only provide newer kernels as an opt-in option. The 7.0 kernel is based on the upcoming Ubuntu 26.04 Resolute release.

We have run this kernel on some of our test setups over the last few days without encountering any significant issues. However, for production setups, we strongly recommend either using the 6.17-based kernel or testing on similar hardware/setups before upgrading any production nodes to 7.0.

How to install:
  1. Ensure that the pve-test repository (or pbs-test for Proxmox Backup Server) is set up correctly.
    You can do so via a CLI text editor or in the web UI under Node -> Repositories.
  2. Open a shell as root, e.g., through SSH or using the integrated shell on the web UI.
  3. apt update
  4. apt install proxmox-kernel-7.0
  5. reboot
Future updates to the 7.0 kernel will now be installed automatically when upgrading a node.
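For reference, the steps above could be sketched as a single shell session. This is a hedged sketch, not official tooling: the deb822 repository file path and the suite name are assumptions for Proxmox VE 9 on Debian 13 "trixie"; adjust them for Proxmox Backup Server or your setup.

```shell
# Sketch (assumptions noted above): enable the pve-test repository in
# deb822 format, then install the opt-in 7.0 kernel. Run as root.
cat > /etc/apt/sources.list.d/pve-test.sources <<'EOF'
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-test
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF

apt update
apt install proxmox-kernel-7.0
reboot
# After the reboot, `uname -r` should report a 7.0 kernel.
```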

Please note:
  • The current 6.17 kernel is still supported and will stay the default kernel until further notice.
  • This kernel brings many changes for improved hardware support and performance across the board.
    For a good overview of the prominent changes, we recommend checking out the kernel-newbies site for 6.18, 6.19, and 7.0.
  • The kernel is also available on the test repositories of Proxmox Backup Server and Proxmox Mail Gateway, and in the test repo of Proxmox Datacenter Manager.
  • The new 7.0-based opt-in kernel will not be made available for the previous Proxmox VE 8 release series.
  • If you're unsure, we recommend continuing to use the 6.17-based kernel for now.

Feedback about how the new kernel performs in any of your setups is welcome!
Please provide basic details like CPU model, storage types used, and whether ZFS is the root file system, both for positive feedback and if you ran into issues where the opt-in 7.0 kernel seems to be the likely cause.

Known Issues:
None at the time of writing.
 
First!
Just kidding. Tested on one physical machine so far; works okay.
Linux SPS2 7.0.0-1-rc6-pve #1 SMP PREEMPT_DYNAMIC PMX 7.0.0-1~rc6+1 (2026-03-30T09:17Z) x86_64 GNU/Linux
 
Kernel seems to be working well so far! (Uptime is less than 15 minutes so... will report back later...)

System details:
System: Dell PowerEdge R740XD
CPU: Intel Xeon Gold 6154
ZFS as root filesystem on SATA PM883s
ZFS as VM storage filesystem on NVMe Kioxia CD8-Rs

The server does seem to be running a little less power-efficiently: idle power consumption is around 15 W higher after all machines have booted.
 
Three machines upgraded

CPUs: Ryzen 3600, Epyc Rome, Ryzen 5825U
Storage: ZFS across all three for root and data.

All smooth, no issues, everything operating as expected.
 
runs fine on
AMD EPYC 9015
ASRockRack
TURIND8UD-2T/X550
zfs mirror
intel E810 nic
-------------------------
AMD EPYC 3151 4-Core Processor
GIGABYTE
MJ11-EC1-OT
zfs mirror
-------------------------
Intel(R) Pentium(R) CPU D1508 @ 2.20GHz
Supermicro
X10SDV-2C-TLN2F
zfs mirror
Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T
 
All fine, ZFS as boot (mirror).

CPU(s):24 x Intel(R) Core(TM) Ultra 9 285K (1 Socket)

Kernel Version Linux 7.0.0-1-rc6-pve (2026-03-30T09:17Z)

Boot Mode EFI (Secure Boot)

Manager Version pve-manager/9.1.7/16b139a017452f16

Well done, thanks!
 
Running great on test lab, small cluster of Dell 7070.

CPU(s) - 8 x Intel(R) Core(TM) i7-9700T CPU @ 2.00GHz (1 Socket)
Kernel Version - Linux 7.0.0-1-rc6-pve (2026-03-30T09:17Z)
Boot Mode- EFI (Secure Boot)
Manager Version - pve-manager/9.1.7/16b139a017452f16

No ZFS here, all LVM (thin volumes for containers and VMs).

Everybody's hardware is different, so we should expect to see different results. In my environment I see a CPU usage reduction, not an increase.

Thank you for Proxmox!
 
I ran a full set of tests today, and everything seems to be working fine.

No changes are apparent compared to before the update.

Performance improvement: None
Performance degradation: None
Increase in power consumption: None
Errors in the log: None

Very good

Code:
[CPU] Intel Core Ultra 7 265K
[MEM] Crucial CP2K48G56C46U5 x4
[MB] Asrock Z890 Pro RS WiFi White (Latest BIOS 3.24 2026/2/5)
[PCIE 1 x16] PowerColor Hellhound Spectral White AMD Radeon RX 9070 XT 16GB GDDR6 (Pass-through to a virtual machine)
[PCIE 2 x1] USB (Pass-through to a virtual machine)
[PCIE 3 x4] Broadcom HBA9500-16i (The following storage devices are connected)
[PCIE 4 x4] Intel X710-DA2
[M.2 Gen5 x4] WDS200T4X0E-EC (Pass-through to a virtual machine)

Code:
Storage
[Boot Volume] ZFS RAID0 KPM5XMUG400G x1
[Local VM Volume] ZFS RAID1 HUSMM3280ASS201 x2
[Disk] HUSMR3232ASS200 x2 (Pass-through to a virtual machine)
[Disk] WUSTM3216ASS200 x2 (Pass-through to a virtual machine)
[Disk] ST8000VN0022 x2 (Pass-through to a virtual machine)
[Truenas ZFS over iSCSI Volume] (proxmox-truenas-native plugin v1.0.113)

Code:
Package
proxmox-ve: 9.1.0 (running kernel: 7.0.0-1-rc6-pve)
pve-manager: 9.1.7 (running version: 9.1.7/16b139a017452f16)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-7.0.0-1-rc6-pve-signed: 7.0.0-1~rc6+1
pve-edk2-firmware: 4.2025.05-2
qemu-server: 9.1.6
pve-qemu-kvm: 10.1.2-7
zfsutils-linux: 2.4.1-pve1
pve-firmware: 3.18-2

Code:
pveperf

// kernel 6.17
pveperf
CPU BOGOMIPS:      155136.00
REGEX/SECOND:      11154807
HD SIZE:           358.47 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     9668.38
DNS EXT:           14.48 ms
DNS INT:           16.04 ms

// kernel 7
pveperf
CPU BOGOMIPS:      155136.00
REGEX/SECOND:      11224671
HD SIZE:           358.49 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     9437.17
DNS EXT:           19.15 ms
DNS INT:           15.93 ms
 
After ~27h everything looks fine and within normal ranges (journalctl, dmesg, Zabbix monitoring stats).

Systems PVE:
- AMD EPYC 9474F 48C (Zen4), DDR5
- AMD EPYC 7402p 24C (Zen2), DDR4
- Intel XL710 10G NICs

VMs:
mostly OpenBSD
Linux
Windows 10

Configuration:
HA
ZFS Pool
Backup via Proxmox Backup



Systems PBS:
- AMD EPYC 7401p 24C (Zen1), DDR4
- AMD Ryzen 3 3200, 4C (Zen+), DDR4
- Intel XL710 10G NICs

Configuration:
ZFS Pool
 
The kernel is also available on the test and no-subscription repositories of Proxmox Backup Server and Proxmox Mail Gateway [...]

Copy + paste oversight, I guess?


Looking forward to the stable release and the (hopefully quick) push into the no-subscription repositories of it. :cool:
 
Copy + paste oversight, I guess?


Looking forward to the stable release and the (hopefully quick) push into the no-subscription repositories of it. :cool:
Yeah, fixed now, thanks for noticing. As this is just opt-in for now, we'll probably make it available on no-subscription relatively soon though.
 
I can observe a high IO pressure stall percentage after switching to this kernel, consistently above 90%, but without any noticeable side effects. All my servers seem to be affected.
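For anyone wanting to check the raw numbers behind that graph: a hedged sketch (not Proxmox tooling) of reading the kernel's PSI (Pressure Stall Information) interface, which these IO pressure figures come from. The sample line below is made-up data, not output from my servers.

```shell
# The kernel exposes IO pressure under /proc/pressure/io, e.g.:
#   cat /proc/pressure/io
# with lines in the documented PSI format, such as:
#   some avg10=91.22 avg60=88.50 avg300=85.01 total=123456
# Extracting the 10-second "some" average from such a line (sample data):
psi_line="some avg10=91.22 avg60=88.50 avg300=85.01 total=123456"
avg10=$(printf '%s\n' "$psi_line" | grep -o 'avg10=[0-9.]*' | cut -d= -f2)
echo "$avg10"   # prints 91.22
```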
 
I can observe a high IO pressure stall percentage after switching to this kernel, consistently above 90%, but without any noticeable side effects. All my servers seem to be affected.
This most likely stems from the newer QEMU 10.2, which is also available on the test repo, and is likely an accounting issue, but we're looking into it in any case.
 