Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

I tried 5.15.30, and unfortunately I still receive the endless "Cannot reserve BAR memory" messages, so it appears to still be broken on my end.

I have tried adding

Code:
video=simplefb:off

in addition to, or as a replacement for, the older framebuffer options in my GRUB_CMDLINE_LINUX_DEFAULT, and it doesn't seem to fix the issue.
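For comparison, a minimal sketch of a passthrough-oriented GRUB_CMDLINE_LINUX_DEFAULT with simplefb disabled; the IOMMU options here are assumptions that depend on your CPU vendor, not a confirmed working line:

Code:
# /etc/default/grub -- sketch only; use amd_iommu=on on AMD, intel_iommu=on on Intel
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=simplefb:off"

Remember to run update-grub afterwards so the change actually lands in the boot configuration.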
 
Just for information:
On kernel 5.13 (5.13.19-5-pve) I always had these messages during boot (partial log):
Code:
Mar 09 09:15:48 pve kernel: clocksource: Switched to clocksource tsc-early
Mar 09 09:15:48 pve kernel: clocksource: Switched to clocksource tsc
Mar 09 09:15:49 pve kernel: clocksource: timekeeping watchdog on CPU21: hpet read-back delay of 812044ns, attempt 4, marking unstable
Mar 09 09:15:49 pve kernel: tsc: Marking TSC unstable due to clocksource watchdog
Mar 09 09:15:49 pve kernel: TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
Mar 09 09:15:49 pve kernel: sched_clock: Marking unstable (13709804983, 37972020)<-(13830452934, -82675923)
Mar 09 09:15:49 pve kernel: clocksource: Checking clocksource tsc synchronization from CPU 14.
Mar 09 09:15:49 pve kernel: clocksource: Switched to clocksource hpet
Mar 09 09:16:05 pve kernel: kvm: SMP vm created on host with unstable TSC; guest TSC will not be reliable
The last line had me a bit worried.
As I found out that some clocksource-related commits were included in kernel 5.15, I decided to install it.

Now, after almost a month on 5.15, I can say that the clocksource stays on tsc and there is no more "guest TSC will not be reliable".
Current clocksource messages:
Code:
Apr 10 11:17:38 pve kernel: clocksource: Switched to clocksource tsc-early
Apr 10 11:17:38 pve kernel: clocksource: Switched to clocksource tsc
Apr 10 11:17:39 pve kernel: clocksource: timekeeping watchdog on CPU18: hpet wd-wd read-back delay of 922533ns
Apr 10 11:17:39 pve kernel: clocksource: wd-tsc-wd read-back delay of 659022ns, clock-skew test skipped!
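
For anyone wanting to check the same thing on their own host, the active clocksource is exposed in sysfs; a quick sketch (standard kernel paths, nothing Proxmox-specific):

Code:
# Show the clocksource currently in use (should print "tsc" here)
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
# Show all clocksources the kernel considers usable
cat /sys/devices/system/clocksource/clocksource0/available_clocksource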
 
So, just to shed some light on these AMD nested virtualization issues:

TLDR: pve 5.15 - Hyper-V and Sandbox tested on a Windows VM WORKING -- BUT on the SAME VM with GPU passthrough I get the "unimplemented wrmsr" error.

Configuration:
- AMD Ryzen 5950X
- ASUS X570 WiFi Gaming II

Started with pve 5.13; I have a primary Windows VM with GPU passthrough and wanted to run Hyper-V (WSL and Sandbox) on it.
- I started with CPU=default (kvm64), but couldn't enable Hyper-V.
- Changed to CPU=host, got into Windows, enabled Hyper-V, restarted, hung on boot... no luck.
- Upgraded PVE to 5.15.
- Got the unimplemented wrmsr issue.

NEW Windows VM saga:
- Created a NEW Windows VM (no GPU passthrough), set CPU to host, enabled Hyper-V, and tested with Sandbox... it works. Note: I was using the PVE console to interact with this new Windows VM.
- Shut down the new Windows VM, added my GPU... and ran into the unimplemented wrmsr issue.
- Removed the GPU; the new Windows VM booted and Sandbox was working.

Back to the old Windows VM:
- Removed the GPU (Hyper-V enabled, CPU=host still) -> using the console, the Windows VM boots and Sandbox works.
- Added the GPU back... Windows doesn't boot... black screen with cursor.
- Reverted back to pve 5.13 (same setup as originally): the GPU Windows VM works, but no Hyper-V/Sandbox.

Overall, I think what is actually triggering this sort of issue is GPU passthrough. Note that GPU passthrough works flawlessly without Hyper-V/nested virtualization.

Hopefully someone with more knowledge can put two and two together and figure this out for us; I'm more than happy to provide whatever info you need.
 
I have a strange problem: when I upgrade to 5.15, my iSCSI target can't be reached anymore, and I don't know why.
Is this a problem with the hardware configuration I use?

2x Intel Nuc NUC10i7FNK2 (32 GB RAM)
1x Intel Nuc NUC7i5DNHE (16 GB RAM)

iSCSI Target: QNAP NAS with 1x 2 TB LUN.
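
To narrow down whether this is a network-level or an iSCSI-level problem, you could test the portal from the PVE host; a sketch using open-iscsi's iscsiadm, where 192.168.1.50 is just a placeholder for the NAS address:

Code:
# Is the iSCSI portal reachable at all? (default port 3260)
nc -zv 192.168.1.50 3260
# Can the host still discover the target?
iscsiadm -m discovery -t sendtargets -p 192.168.1.50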
 
TLDR: pve 5.15 - Hyper-V and Sandbox tested on a Windows VM WORKING -- BUT on the SAME VM with GPU passthrough I get the "unimplemented wrmsr" error. [...]
I have the same problem as you.
 
I'm still not having any luck with GPU passthrough. Could anyone who has it working share their GRUB_CMDLINE_LINUX_DEFAULT line? I believe I have tried every combination I can think of, but I'm sure I'm missing or misunderstanding something.

Thanks!
 
I'm still not having any luck with GPU passthrough. Could anyone who has it working share their GRUB_CMDLINE_LINUX_DEFAULT line? [...]
Sorry, but it is not that simple; I have no changes to GRUB_CMDLINE_LINUX_DEFAULT and pass through two GPUs. Please start a thread on the forum that tells us what motherboard, CPU, and GPUs you are using, and what problems you run into.
 
TLDR: pve 5.15 - Hyper-V and Sandbox tested on a Windows VM WORKING -- BUT on the SAME VM with GPU passthrough I get the "unimplemented wrmsr" error. [...]
I have the same problem as you.
Did you both enable 'ignore_msrs=1' for the kvm module?
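
For anyone who hasn't set this yet, it is usually done via a modprobe option; a minimal sketch (the file name kvm.conf is just a convention):

Code:
# /etc/modprobe.d/kvm.conf -- let KVM ignore guest accesses to unhandled MSRs
options kvm ignore_msrs=1
# Optionally also silence the resulting "ignored wrmsr" log spam:
# options kvm ignore_msrs=1 report_ignored_msrs=0

After editing, run update-initramfs -u -k all and reboot (or reload the kvm/kvm_amd modules) for the option to take effect.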
 
Working well on my system with an AMD 3900X (ondemand governor) on an X570 board with IOMMU and SR-IOV.
Using the latest i40e and iavf drivers.
Command line:

Code:
BOOT_IMAGE=/boot/vmlinuz-5.15.30-2-pve root=/dev/mapper/pve-root ro quiet amd_iommu=on iommu=pt kvm_amd.npt=1 kvm_amd.avic=1 nmi_watchdog=0 mitigations=off
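
If anyone wants to confirm the IOMMU actually came up with these options, a quick sketch (generic kernel interfaces, nothing specific to this board):

Code:
# Verify the AMD IOMMU initialized (look for "AMD-Vi" lines)
dmesg | grep -i -e "AMD-Vi" -e iommu
# List the resulting IOMMU groups and their devices
find /sys/kernel/iommu_groups/ -type l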

Have a good day.
 
Hi,
I had an issue with the SATA controller. The message "failed command: WRITE FPDMA QUEUED" was showing up several times a day, in fact many times every hour... With this new kernel the message showed up a couple of times but then just disappeared. It looks like the system "adapts" to the SATA behaviour and fine-tunes itself. The fact is that for two days now I haven't seen the message again.
 
I had an issue with the SATA controller. The message "failed command: WRITE FPDMA QUEUED" was showing up several times a day [...]
Check your logs to see if the SATA driver is slowing down the link in response to this error. This indicates a problem with the SATA port or the cable (or just a poor connection that can sometimes be fixed by unplugging and replugging several times), and I suggest reconnecting to another port and/or replacing the cable.
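
A quick way to check this from the logs (a sketch; the exact ata link numbering varies per system):

Code:
# Link speed messages, e.g. "SATA link up 6.0 Gbps" vs. "3.0 Gbps" after a downshift
dmesg | grep -i "SATA link"
# How often the NCQ error has fired since boot
dmesg | grep -c "WRITE FPDMA QUEUED"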
 
Check your logs to see if the SATA driver is slowing down the link in response to this error. [...]
Hi @leesteken,
Thanks for your reply.
In fact, the speed went down from 6.0 Gbps to 3.0 Gbps. Regarding cabling, I'm using an internal NVMe SSD, so there is no cabling. I took out the SSD and reconnected it, but the behaviour is the same.
I've read about newer versions of the 5.15 kernel; I'll check which one I have and upgrade if necessary.
 
I didn't but will try. Thx.
Hi @Jrant,
Done; however, the system then keeps sending the message "failed command: WRITE FPDMA QUEUED". The link is no longer reset to 3.0 Gbps though; in fact it is reset to 6.0 Gbps.
Do you know if this error is "real" or a bug? My concern is that if it is real, it could bring some other issues.
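
Since "WRITE FPDMA QUEUED" is an NCQ command, one common experiment (not something confirmed in this thread) is to disable NCQ and see whether the errors stop; a sketch using the standard libata.force kernel parameter, where the 1.00 link ID is a placeholder for your drive's ata link:

Code:
# /etc/default/grub -- disable NCQ globally as a test
GRUB_CMDLINE_LINUX_DEFAULT="quiet libata.force=noncq"
# ...or limit it to a single link (placeholder ID), then update-grub and reboot:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet libata.force=1.00:noncq"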
 
Kernel version 5.15.27-1-pve introduced a bug for network cards based on the Aquantia Atlantic chipset.

Code:
Mar 19 16:31:12 pve-1 kernel: ================================================================================
Mar 19 16:31:12 pve-1 kernel: UBSAN: array-index-out-of-bounds in drivers/net/ethernet/aquantia/atlantic/aq_nic.c:484:48
Mar 19 16:31:12 pve-1 kernel: index 8 is out of range for type 'aq_vec_s *[8]'
...
I have a similar issue, and it seems there is a patch already: https://patchwork.kernel.org/projec...08022204.16815-1-kai.heng.feng@canonical.com/

It would be nice to apply it in the next release.
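
To confirm whether a given NIC is affected, check which driver it binds to; a sketch where enp1s0 is a placeholder interface name:

Code:
# Expect "driver: atlantic" on affected Aquantia cards
ethtool -i enp1s0
# Look for the UBSAN warning after the driver loads
dmesg | grep -i -e UBSAN -e atlantic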
 
