Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test and no-subscription

After updating my test installation of PBS to the new 7.0 kernel, I see the following error from time to time:

Code:
[ 6741.128241] ------------[ cut here ]------------
[ 6741.128252] [CRTC:36:crtc-0] vblank wait timed out
[ 6741.128258] WARNING: drivers/gpu/drm/drm_atomic_helper.c:1921 at drm_atomic_helper_wait_for_vblanks.part.0+0x240/0x260, CPU#0: kworker/0:0/6947
[ 6741.128281] Modules linked in: nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace netfs bonding tls sunrpc binfmt_misc aesni_intel pcspkr vmgenid bochs input_leds joydev mac_hid sch_fq_codel efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vsock vmw_vmci dmi_sysfs qemu_fw_cfg ip_tables x_tables autofs4 hid_generic usbhid hid zfs(PO) spl(O) btrfs libblake2b xor raid6_pq psmouse serio_raw i2c_piix4 i2c_smbus uhci_hcd ehci_pci ehci_hcd pata_acpi floppy
[ 6741.128340] CPU: 0 UID: 0 PID: 6947 Comm: kworker/0:0 Tainted: P           O        7.0.0-3-pve #1 PREEMPT(lazy)
[ 6741.128350] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE
[ 6741.128355] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 4.2025.05-2 11/13/2025
[ 6741.128363] Workqueue: events drm_fb_helper_damage_work
[ 6741.128370] RIP: 0010:drm_atomic_helper_wait_for_vblanks.part.0+0x247/0x260
[ 6741.128377] Code: ff 84 c0 74 86 48 8d 75 a8 4c 89 f7 e8 c2 1e 3f ff 8b 45 98 85 c0 0f 85 f7 fe ff ff 48 8d 3d c0 6a 97 01 48 8b 53 20 8b 73 60 <67> 48 0f b9 3a e9 df fe ff ff e8 ba 83 51 00 66 2e 0f 1f 84 00 00
[ 6741.128391] RSP: 0018:ffffd51c4005fbd0 EFLAGS: 00010246
[ 6741.128396] RAX: 0000000000000000 RBX: ffff8e1c450c6bd0 RCX: 0000000000000000
[ 6741.128403] RDX: ffff8e1cbfbfb7e0 RSI: 0000000000000024 RDI: ffffffffa4623b50
[ 6741.128409] RBP: ffffd51c4005fc40 R08: 0000000000000000 R09: 0000000000000000
[ 6741.128415] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 6741.128422] R13: 0000000000000000 R14: ffff8e1c5eea4c30 R15: ffff8e1c46589c80
[ 6741.128430] FS:  0000000000000000(0000) GS:ffff8e1d18b0f000(0000) knlGS:0000000000000000
[ 6741.128438] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 6741.128444] CR2: 00007ad53800f228 CR3: 000000001bffd000 CR4: 00000000000006f0
[ 6741.128452] Call Trace:
[ 6741.128456]  <TASK>
[ 6741.128461]  ? __pfx_autoremove_wake_function+0x10/0x10
[ 6741.128469]  drm_atomic_helper_commit_tail+0xa9/0xd0
[ 6741.128475]  commit_tail+0x11f/0x1b0
[ 6741.128480]  drm_atomic_helper_commit+0x132/0x160
[ 6741.128486]  drm_atomic_commit+0xad/0xf0
[ 6741.128492]  ? __pfx___drm_printfn_info+0x10/0x10
[ 6741.128498]  drm_atomic_helper_dirtyfb+0x1d5/0x2c0
[ 6741.128505]  drm_fbdev_shmem_helper_fb_dirty+0x4d/0xb0
[ 6741.128510]  drm_fb_helper_damage_work+0xf2/0x1a0
[ 6741.128516]  process_one_work+0x1a9/0x3c0
[ 6741.128522]  worker_thread+0x1b8/0x360
[ 6741.128527]  ? _raw_spin_unlock_irqrestore+0x11/0x60
[ 6741.128534]  ? __pfx_worker_thread+0x10/0x10
[ 6741.128539]  kthread+0xf7/0x130
[ 6741.128707]  ? __pfx_kthread+0x10/0x10
[ 6741.128859]  ret_from_fork+0x2dc/0x3a0
[ 6741.129002]  ? __pfx_kthread+0x10/0x10
[ 6741.129141]  ret_from_fork_asm+0x1a/0x30
[ 6741.129280]  </TASK>
[ 6741.129428] ---[ end trace 0000000000000000 ]---

I currently see no negative effects from this error; it only shows up on the console and in dmesg.

uname: Linux pbs 7.0.0-3-pve #1 SMP PREEMPT_DYNAMIC PMX 7.0.0-3 (2026-04-21T22:56Z) x86_64 GNU/Linux
Used system: PBS VM running on PVE
Mounted file systems:
  • root on ZFS
  • S3 Cache Disk on ext4
  • Backup storage (beside S3 off-site backup) on NFS
I see the same vblank wait timed out trace with our nested PVE labs. Adding nomodeset to the kernel command line seems to eliminate these (a serial console would probably work too).
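
For reference, a minimal sketch of enabling nomodeset on a GRUB-booted system (on setups booting via proxmox-boot-tool, e.g. ZFS on root, you would instead edit /etc/kernel/cmdline and run proxmox-boot-tool refresh):

Code:
# Append nomodeset to the default kernel command line (GRUB assumed)
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&nomodeset /' /etc/default/grub
update-grub
reboot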
 
Just updated PVE from 9.1.6 to 9.1.9 and to kernel 7.0.0-3-pve and got a kernel panic.

I pinned the kernel to 6.17.13-2-pve for now and I'm back in business.

I'm happy to provide details to help debug. Just tell me what you need.
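
In case it helps others, a sketch of the pinning workflow used here (the version string is the one from this report):

Code:
proxmox-boot-tool kernel list               # show installed kernels
proxmox-boot-tool kernel pin 6.17.13-2-pve  # boot this kernel by default
reboot
# later, to return to the newest installed kernel:
proxmox-boot-tool kernel unpin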

Hi. Apparently the same thing happened to me. I went from 9.1.6 to 9.1.9 today and got a kernel panic:
Code:
unable to mount root fs on unknown-block(0,0)

Rebooted back to 6.17.13-7-pve and all seems OK.

journalctl does not seem to show much, but maybe I'm running the wrong command.
@fabian: Is there any way I can help before removing the 7.0.x kernel?
 
I've just run a test against an Android phone running Tailscale and iperf3 -s (PingTools from the Play Store) and received about 4 MB/s. This looks perfectly acceptable to me considering the path uses two LTE connections (my node reaches the Internet via an LTE router, and so does the phone).
A test of iperf3 over Tailscale between two LXCs on the same machine shows great performance:
Code:
09:42 user@samba:~ > iperf3 -c ts.ip.same.machine1 -t 5
Connecting to host ts.ip.same.machine1, port 5201
[  5] local ts.ip.same.machine2 port 59294 connected to ts.ip.same.machine1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.38 GBytes  11.9 Gbits/sec    0   4.16 MBytes
[  5]   1.00-2.00   sec  1.40 GBytes  12.0 Gbits/sec    0   4.16 MBytes
[  5]   2.00-3.00   sec  1.39 GBytes  11.9 Gbits/sec    0   4.16 MBytes
[  5]   3.00-4.00   sec  1.39 GBytes  12.0 Gbits/sec    0   4.16 MBytes
[  5]   4.00-5.00   sec  1.38 GBytes  11.8 Gbits/sec    0   4.16 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-5.00   sec  6.94 GBytes  11.9 Gbits/sec    0            sender
[  5]   0.00-5.00   sec  6.94 GBytes  11.9 Gbits/sec                  receiver

iperf Done.
09:43 user@samba:~ > iperf3 -c ts.ip.same.machine1 -t 5 -R
Connecting to host ts.ip.same.machine1, port 5201
Reverse mode, remote host ts.ip.same.machine1 is sending
[  5] local ts.ip.same.machine2 port 59310 connected to ts.ip.same.machine1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.39 GBytes  11.9 Gbits/sec
[  5]   1.00-2.00   sec  1.40 GBytes  12.0 Gbits/sec
[  5]   2.00-3.00   sec  1.41 GBytes  12.2 Gbits/sec
[  5]   3.00-4.00   sec  1.42 GBytes  12.2 Gbits/sec
[  5]   4.00-5.00   sec  1.41 GBytes  12.1 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-5.00   sec  7.03 GBytes  12.1 Gbits/sec    0            sender
[  5]   0.00-5.00   sec  7.03 GBytes  12.1 Gbits/sec                  receiver

iperf Done.

However, the issue persists between two different machines on the same LAN (or beyond), with hugely asymmetric throughput:
(n.b. 10GbE to the main switch, but the receiving end here has only a USB 2.5 GbE dongle)

Code:
09:49 user@samba:~ > iperf3 -c ts.ip.same.lan1 -t 5
Connecting to host ts.ip.same.lan1, port 5201
[  5] local ts.ip.same.lan2 port 41570 connected to ts.ip.same.lan1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   512 KBytes  4.19 Mbits/sec   43   2.40 KBytes
[  5]   1.00-2.00   sec   768 KBytes  6.29 Mbits/sec   38   2.40 KBytes
[  5]   2.00-3.00   sec   128 KBytes  1.05 Mbits/sec   24   2.40 KBytes
[  5]   3.00-4.00   sec   384 KBytes  3.15 Mbits/sec   41   1.20 KBytes
[  5]   4.00-5.00   sec   384 KBytes  3.14 Mbits/sec   32   1.20 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-5.00   sec  2.12 MBytes  3.56 Mbits/sec  178            sender
[  5]   0.00-5.00   sec  2.12 MBytes  3.56 Mbits/sec                  receiver

iperf Done.
09:54 user@samba:~ > iperf3 -c ts.ip.same.lan1 -t 5 -R
Connecting to host ts.ip.same.lan1, port 5201
Reverse mode, remote host ts.ip.same.lan1 is sending
[  5] local ts.ip.same.lan2 port 46490 connected to ts.ip.same.lan1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   117 MBytes   984 Mbits/sec
[  5]   1.00-2.00   sec   137 MBytes  1.15 Gbits/sec
[  5]   2.00-3.00   sec   123 MBytes  1.03 Gbits/sec
[  5]   3.00-4.00   sec   156 MBytes  1.31 Gbits/sec
[  5]   4.00-5.00   sec   151 MBytes  1.27 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-5.00   sec   688 MBytes  1.15 Gbits/sec    1            sender
[  5]   0.00-5.00   sec   685 MBytes  1.15 Gbits/sec                  receiver

iperf Done.

This leads me to conclude:

TCP over WireGuard/Tailscale works correctly as shown here - at 11.9 Gbps between LXCs on the same host (Linux pve-homeserver25 7.0.2-2-pve), but collapses to ~4 Mbps when packets egress through the physical Mellanox ConnectX-4 Lx NIC. The regression is therefore isolated to the mlx5 transmit path handling of WireGuard UDP packets in kernel 7.x.

Not sure how to take the diagnosis any further; I would love to get some Proxmox input on this issue...
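
One way to probe that hypothesis might be to toggle the NIC's UDP-related offloads and re-run iperf3. A sketch, where the interface name is a placeholder and the offloads being at fault is an unverified assumption:

Code:
IF=enp2s0f0np0                              # placeholder: your mlx5 interface
ethtool -k "$IF" | grep -Ei 'udp|gso|gro'   # list current offload states
ethtool -K "$IF" tx-udp-segmentation off    # disable UDP segmentation offload
# re-run: iperf3 -c ts.ip.same.lan1 -t 5
ethtool -K "$IF" tx-udp-segmentation on     # restore afterwards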


Code:
02:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
        Subsystem: Mellanox Technologies Stand-up ConnectX-4 Lx EN, 25GbE dual-port SFP28, PCIe3.0 x8, MCX4121A-ACAT
        Kernel driver in use: mlx5_core
        Kernel modules: mlx5_core
02:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
        Subsystem: Mellanox Technologies Stand-up ConnectX-4 Lx EN, 25GbE dual-port SFP28, PCIe3.0 x8, MCX4121A-ACAT
        Kernel driver in use: mlx5_core
        Kernel modules: mlx5_core

Querying Mellanox devices firmware ...

Device #1:
----------

  Device Type:      ConnectX4LX
  Part Number:      MCX4121A-ACA_Ax
  Description:      ConnectX-4 Lx EN network interface card; 25GbE dual-port SFP28; PCIe3.0 x8; ROHS R6
  PSID:             MT_2420110034
  PCI Device Name:  /dev/mst/mt4117_pciconf0
  Base MAC:         0c42a12d0cd2
  Versions:         Current        Available
     FW             14.32.1912     14.32.1010
     PXE            3.6.0502       3.6.0502
     UEFI           14.25.0017     14.25.0017

  Status:           Up to date


It is of course worth remembering that others, with completely different NICs, are also seeing this issue (I know of at least 4 reports between here and this reddit thread: https://www.reddit.com/r/Proxmox/comments/1t7xs12/proxmox_tailscale_lxc_regression_wkernel_7/ )

I have yet to see any non-Proxmox reports that sound similar, though.
 
This leads me to conclude:

TCP over WireGuard/Tailscale works correctly as shown here - at 11.9 Gbps between LXCs on the same host (Linux pve-homeserver25 7.0.2-2-pve), but collapses to ~4 Mbps when packets egress through the physical Mellanox ConnectX-4 Lx NIC. The regression is therefore isolated to the mlx5 transmit path handling of WireGuard UDP packets in kernel 7.x.

Not sure how to take the diagnosis any further; I would love to get some Proxmox input on this issue...
I'm not sure your conclusion is totally correct, since the incoming iperf3 -R run doesn't show that regression, even though those packets presumably also pass through the Mellanox ConnectX-4 Lx NIC.

As a test, you could run iperf3 from the host itself or from another LXC/VM (not user@samba) to eliminate other variables (such as Samba).

Good luck.
 
Already done; the LXC is seemingly irrelevant. The samba one is Debian 13 with nothing but Samba on it, but the issue also occurs with an Ubuntu 24.04 LXC running Jellyfin. I don't have (and don't really want) Tailscale installed on the PVE host itself, so I can't really test from there. (I have of course also tested iperf3 without Tailscale, and then I do get the full expected speeds.)

The conclusion was that the issue lies in the transmit path, not the receive path, as I believe the results demonstrate.
 
Hi. Apparently the same thing happened to me. I went from 9.1.6 to 9.1.9 today and got a kernel panic:
Code:
unable to mount root fs on unknown-block(0,0)

Rebooted back to 6.17.13-7-pve and all seems OK.

journalctl does not seem to show much, but maybe I'm running the wrong command.
@fabian: Is there any way I can help before removing the 7.0.x kernel?
Yes, please open a new thread (feel free to @-mention me) and include a journal of the working kernel plus details about your setup and hardware, in particular where your rootfs is stored ;)
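
A sketch of collecting that journal (note that a panicked 7.0 boot usually leaves no persistent journal, since the rootfs never mounted; the working-kernel boot is the one that matters here):

Code:
journalctl --list-boots                   # map boot indices to dates
journalctl -b 0 -k > journal-working.txt  # kernel messages of the current boot
uname -r                                  # confirm which kernel produced them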
 
Is this opt-in? When I updated to PVE 9.1, this kernel was installed by default.
I also see no 6.17 mentioned in the thread; I have 6.8, which was already there on Proxmox 8, and this 7.0 was installed during the apt dist-upgrade process.
 
After upgrading to the latest Proxmox 9 and its Linux kernel 7.0.x, my Windows 11 VM will not boot and crashes, with automatic startup repair failing.
When I pin the kernel to 6.17.13-8-pve via `proxmox-boot-tool kernel pin 6.17.13-8-pve --next-boot`, it works perfectly fine.

There's no GPU or PCIe passthrough involved.
I am using an Intel N150 host and CPU type host for the VM.
 
After upgrading to the latest Proxmox 9 and its Linux kernel 7.0.x, my Windows 11 VM will not boot and crashes, with automatic startup repair failing.
When I pin the kernel to 6.17.13-8-pve via `proxmox-boot-tool kernel pin 6.17.13-8-pve --next-boot`, it works perfectly fine.

There's no GPU or PCIe passthrough involved.

See the "Known issues", whether it applies to your case (Windows 11 might also be affected?):
 
Interesting update - especially the decision to align the default kernel with the upcoming Ubuntu 26.04 base. The jump to the 7.0 kernel should bring noticeable improvements for newer AMD EPYC and Intel Xeon platforms, along with better NVMe and networking support.
 
Across all my 6-7 Zen 3/4/5 servers especially, whether thanks to ZFS 2.4.2 or since kernel 7.0.2-2/4...

The performance increased a lot.
I mean, I can feel it, so it's more than 10%.
Even on old Intel servers I can feel some performance increase; not as huge as on Zen 4, but there is some too.

And the new noVNC/terminal console is a huge improvement as well!!!

Thanks a lot!
This is definitely the biggest (or only) performance increase I have seen from any kernel or update in years.

However, it's also the first time I've had an issue, with some weird PBS storage timeouts or the storage being unavailable; that's why I came across the forum, to read whether someone else is affected.
But so far I think it's probably an issue on my side.
So for now I would say it's amazing. It's one of the best updates, maybe the best in years.

EDIT: My issue is SMB-related, with the Hetzner CIFS storage being unreliable, so it has nothing to do with Proxmox. Everything is perfect as always.

Cheers
 
See the "Known issues", whether it applies to your case (Windows 11 might also be affected?):
It seems the workaround will be merged into Proxmox soon: https://lore.proxmox.com/all/20260515155810.229819-1-f.ebner@proxmox.com/T/
 
We recently uploaded the 7.0 (rc6) kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.17, but 7.0 is now an option.

We plan to use the 7.0 kernel as the new default for the upcoming Proxmox VE 9.2 and Proxmox Backup Server 4.2 releases planned later in Q2.
This follows our tradition of upgrading the Proxmox VE kernel to match the current Ubuntu version until we reach an Ubuntu LTS release, at which point we will only provide newer kernels as an opt-in option. The 7.0 kernel is based on the upcoming Ubuntu 26.04 Resolute release.

We have run this kernel on some of our test setups over the last few days without encountering any significant issues. However, for production setups, we strongly recommend either using the 6.17-based kernel or testing on similar hardware/setups before upgrading any production nodes to 7.0.

How to install:
  1. Ensure that you either have the pve-test repository (or pbs-test for Proxmox Backup Server) or the respective no-subscription repositories set up correctly.
You can do so via a CLI text editor or through the web UI under Node -> Repositories.
  2. Open a shell as root, e.g., through SSH or using the integrated shell on the web UI.
  3. apt update
  4. apt install proxmox-kernel-7.0
  5. reboot
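Put together, steps 3-5 look like this (the repository setup from step 1 is assumed to be done already):

Code:
apt update
apt install proxmox-kernel-7.0
reboot
# after the reboot, verify the running kernel:
uname -r    # should report a 7.0.x-y-pve version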
Future updates to the 7.0 kernel will now be installed automatically when upgrading a node.

Please note:
  • The current 6.17 kernel is still supported and will stay the default kernel until further notice.
  • There were many changes for improved hardware support and performance improvements across the board.
    For a good overview of prominent changes, we recommend checking out the kernel-newbies site for 6.18, 6.19, and 7.0.
  • The kernel is also available on the test and no-subscription repositories of Proxmox Backup Server and Proxmox Mail Gateway, and in the test repo of Proxmox Datacenter Manager.
  • The new 7.0-based opt-in kernel will not be made available for the previous Proxmox VE 8 release series.
  • If you're unsure, we recommend continuing to use the 6.17-based kernel for now.

Feedback about how the new kernel performs in any of your setups is welcome!
Please provide basic details like CPU model, storage types used, whether ZFS is the root file system, and the like, both for positive feedback and if you ran into issues where the opt-in 7.0 kernel seems to be the likely cause.
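
A sketch of commands that collect most of those details in one go (pveversion is specific to Proxmox VE; on PBS, proxmox-backup-manager versions serves the same purpose):

Code:
uname -r                          # running kernel
lscpu | grep 'Model name'         # CPU model
pveversion -v | head              # Proxmox VE package versions
findmnt -no FSTYPE /              # root file system type (e.g. zfs)
lsblk -o NAME,TYPE,FSTYPE,SIZE    # storage layout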

Known Issues:
None at the time of writing.

Edit 2026-04-20: the kernel is now also available on the no-subscription repository.
Edit 2026-04-29: the kernel is now the default on no-subscription, and will be the default for Proxmox Backup Server. The remaining rollout for Proxmox VE will happen over the next weeks.
May I inquire how the additional attack vector of Rust in the kernel is being triaged?
Is it of no concern on the one hand, or actively being stripped out on the other?

Or anything in between, of course.
 
Kernel 7.0 is unusable for me.
Code:
May 17 20:02:05 px1 pveproxy[3689]: worker 406420 started
May 17 20:02:30 px1 QEMU[3776]: kvm: vfio_container_dma_map(0x5bc5a7745dd0, 0xe0cd0000, 0x1000, 0x717b85a6e000) = -28 (No space left on device)
May 17 20:02:30 px1 QEMU[3776]: kvm: vfio_container_dma_map(0x5bc5a7745dd0, 0xe0ccf000, 0x1000, 0x717a4960b000) = -28 (No space left on device)
May 17 20:02:30 px1 QEMU[3776]: kvm: vfio_container_dma_map(0x5bc5a7745dd0, 0xe0cce000, 0x1000, 0x717a4960a000) = -28 (No space left on device)
May 17 20:02:30 px1 kernel: DMAR: DRHD: handling fault status reg 2
May 17 20:02:30 px1 kernel: DMAR: [DMA Read NO_PASID] Request device [04:10.0] fault addr 0xe0ca5000 [fault reason 0x06] PTE Read access is not set
May 17 20:02:30 px1 kernel: DMAR: DRHD: handling fault status reg 102
May 17 20:02:30 px1 kernel: DMAR: [DMA Read NO_PASID] Request device [04:10.0] fault addr 0xe0ca3000 [fault reason 0x06] PTE Read access is not set
May 17 20:02:30 px1 kernel: DMAR: DRHD: handling fault status reg 202
May 17 20:02:30 px1 kernel: DMAR: [DMA Read NO_PASID] Request device [04:10.0] fault addr 0xe0ca1000 [fault reason 0x06] PTE Read access is not set
May 17 20:02:30 px1 kernel: DMAR: DRHD: handling fault status reg 302
May 17 20:02:30 px1 QEMU[3776]: kvm: vfio_container_dma_map(0x5bc5a7745dd0, 0xe0ccd000, 0x1000, 0x717b87523000) = -28 (No space left on device)
...
May 17 20:02:33 px1 QEMU[3776]: kvm: vfio_container_dma_map(0x5bc5a7745dd0, 0xe0c70000, 0x1000, 0x717a1d328000) = -28 (No space left on device)
May 17 20:02:34 px1 kernel: vfio-pci 0000:04:10.0: timed out waiting for pending transaction; performing function level reset anyway
May 17 20:02:34 px1 kernel: ixgbe 0000:04:00.0 enp4s0f0: 7 Spoofed packets detected
May 17 20:02:46 px1 pvestatd[3650]: storage 'NAS_VM_Backups' is not online
May 17 20:02:47 px1 pvestatd[3650]: status update time (10.498 seconds)
May 17 20:02:57 px1 QEMU[3776]: kvm: vfio_container_dma_map(0x5bc5a7745dd0, 0xe0c6f000, 0x1000, 0x717a1d32a000) = -28 (No space left on device)
As in my previous post (still waiting for a moderator), 7.0.2-4 also has issues with SR-IOV on the Intel X540-AT4.
Going back to 6.17.
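
For what it's worth, -28 (ENOSPC) from vfio_container_dma_map usually points at the vfio_iommu_type1 DMA mapping limit rather than actual disk space. A diagnostic sketch, assuming (unverified) that this limit is what is being hit:

Code:
cat /sys/module/vfio_iommu_type1/parameters/dma_entry_limit  # default 65535
# raise it persistently (assumption: the limit, not a driver bug, is the cause)
echo 'options vfio_iommu_type1 dma_entry_limit=1048576' > /etc/modprobe.d/vfio-dma.conf
update-initramfs -u -k all
reboot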
 