DirtyPipe (CVE-2022-0847) fix for Proxmox VE

psuter
Dear Proxmox team,

As of today a new security issue has been published which also affects the kernels available for Proxmox VE 7. It is dubbed "DirtyPipe" and allows local privilege escalation on affected systems.

all details can be found here:
https://dirtypipe.cm4all.com/

I checked my Proxmox VE 7 installation and found that the latest available pve-kernel version currently seems to be 5.15.7; unless the fix was backported, we would need at least 5.15.25. Kernels since 5.8 are affected, so that seems to cover all kernels currently available for PVE 7.
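For a quick plausibility check, here is a rough sketch of comparing the running kernel against the affected upstream range. The `is_in_affected_range` helper name is my own, the bounds are simplified to the 5.15 series (other series have their own fix releases), and vendor backports are not taken into account, so treat this as a heuristic only:

```shell
#!/bin/sh
# Heuristic: DirtyPipe affects upstream kernels >= 5.8; the 5.15 series
# was fixed in 5.15.25. Distro kernels may carry the fix as a backport
# regardless of what the version number suggests.
is_in_affected_range() {
    v="$1"
    # v >= 5.8 ?
    [ "$(printf '5.8\n%s\n' "$v" | sort -V | head -n1)" = "5.8" ] || return 1
    # v <= 5.15.25 ?
    [ "$(printf '5.15.25\n%s\n' "$v" | sort -V | head -n1)" = "$v" ] || return 1
    # and strictly below the fix release
    [ "$v" != "5.15.25" ]
}

v="$(uname -r | cut -d- -f1)"
if is_in_affected_range "$v"; then
    echo "kernel $v is in the affected upstream range (unless the fix was backported)"
else
    echo "kernel $v is outside the affected upstream range"
fi
```

Note the use of `sort -V`: plain string comparison would order these version numbers incorrectly.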

Please keep us informed about when an update that fixes this bug becomes available.

Kind regards,
Pascal
 
I see the commit for this here; I'm on pve-manager/7.1-10/6ddebafe (running kernel: 5.13.19-3-pve):
https://git.proxmox.com/?p=pve-kernel.git;a=commit;h=95f0dbdf95bd74024ca6c266652448c08ad8e4b2
That commit is packaged and currently on our internal testing repositories; once we are confident it works as expected, it will be made available in pvetest (and in pve-no-subscription a bit later).

I'll try to notify you here once the fixed version is available.

I hope this helps!
 
The respective 5.13-based kernel package has been available on the pvetest repository since 15:50 UTC+1 and on the pve-no-subscription repository since ~18:00 UTC+1.

The fix is included with pve-kernel-5.13.19-5-pve in version 5.13.19-11; note that the package version was already bumped once due to an unrelated regression with some PCIe devices (to allow fast-tracking this fix without too much regression potential).
 
I checked my Proxmox VE 7 installation and found that the latest available pve-kernel version currently seems to be 5.15.7; unless the fix was backported, we would need at least 5.15.25.
The latest version of our opt-in 5.15-based kernel derives from the 5.15.19 upstream stable release (lexical rather than numerical sorting would suggest that .7 is the newest). Besides that, one cannot always correlate Linux upstream stable release tags with the versions we use; we often avoid making big jumps between releases (bigger regression potential) and instead backport the respective fix ourselves. Here the fix was rather trivial (see for yourself), and a reproducer was available to verify that it works.
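The numerical-vs-lexical point is easy to demonstrate with coreutils (`sort -V` implements roughly the version ordering package managers use):

```shell
# Plain lexical sort compares character by character, so "5.15.7" lands
# after "5.15.19" (because '7' > '1'); version sort compares the numeric
# fields and correctly puts 5.15.19 last.
printf '5.15.7\n5.15.19\n' | sort | tail -n1     # lexical -> 5.15.7
printf '5.15.7\n5.15.19\n' | sort -V | tail -n1  # version -> 5.15.19
```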

The fix for this issue in the 5.15 kernel was packaged with pve-kernel-5.15.19-2-pve in version 5.15.19-3, currently available on pve-no-subscription. We'll evaluate how soon we can move both the 5.13- and 5.15-based kernel packages including this fix to the enterprise repositories while minimizing regression potential.
 
Any idea when the fixed kernels will be released in pve-enterprise?

Something like a few hours, or days?
Currently planning to move it there tomorrow morning, at least if no regressions come up.

If it's critical for you, you could temporarily add the no-subscription repository and install only the specific newer kernel package.
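A sketch of that temporary workaround, assuming a standard PVE 7 (Debian Bullseye) host; the repository line is the documented no-subscription one, and the package name/version are the ones from this thread. Run as root:

```shell
# Temporarily enable the no-subscription repository
echo 'deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription' \
    > /etc/apt/sources.list.d/pve-no-subscription.list
apt update

# Install only the fixed kernel package
apt install pve-kernel-5.13.19-5-pve

# After rebooting into the new kernel, drop the extra repository again
# if you only wanted this single package from it:
rm /etc/apt/sources.list.d/pve-no-subscription.list
apt update
```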
 
I just installed the latest 5.13.19-5-pve kernel from no-subscription on two servers (all packages up to date) and got the same behaviour on both systems: KVM machines do not start ("Display was not started yet"), and syslog shows the following output for each core:


Code:
Mar  7 22:36:40 promo8 kernel: [  680.493164] ------------[ cut here ]------------
Mar  7 22:36:41 promo8 kernel: [  680.494131] WARNING: CPU: 18 PID: 38490 at arch/x86/kvm/vmx/vmx.c:6336 vmx_sync_pir_to_irr+0xad/0xd0 [kvm_intel]
Mar  7 22:36:41 promo8 kernel: [  680.495070] Modules linked in: tcp_diag inet_diag nfsv3 nfs_acl nfs lockd grace fscache netfs ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter sctp ip6_udp_tunnel udp_tunnel nf_tables 8021q garp mrp bonding tls softdog nfnetlink_log nfnetlink intel_rapl_msr intel_rapl_common ipmi_ssif sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd rapl intel_cstate pcspkr efi_pstore mgag200 drm_kms_helper cec rc_core i2c_algo_bit fb_sys_fops syscopyarea input_leds joydev ioatdma sysfillrect sysimgblt hpilo dca acpi_ipmi ipmi_si ipmi_devintf acpi_tad ipmi_msghandler acpi_power_meter mac_hid vhost_net vhost vhost_iotlb tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi drm sunrpc ip_tables x_tables autofs4 zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) btrfs
Mar  7 22:36:41 promo8 kernel: [  680.495113]  blake2b_generic xor zstd_compress raid6_pq hid_generic usbkbd usbmouse usbhid ses hid enclosure xhci_pci crc32_pclmul xhci_pci_renesas uhci_hcd i2c_i801 ehci_pci i2c_smbus lpc_ich nvme xhci_hcd ehci_hcd bnx2x nvme_core hpsa mdio libcrc32c scsi_transport_sas wmi
Mar  7 22:36:41 promo8 kernel: [  680.505034] CPU: 18 PID: 38490 Comm: kvm Tainted: P        W  O      5.13.19-5-pve #1
Mar  7 22:36:41 promo8 kernel: [  680.506057] Hardware name: HP ProLiant DL560 Gen9/ProLiant DL560 Gen9, BIOS P85 10/16/2020
Mar  7 22:36:41 promo8 kernel: [  680.507105] RIP: 0010:vmx_sync_pir_to_irr+0xad/0xd0 [kvm_intel]
Mar  7 22:36:41 promo8 kernel: [  680.508198] Code: 45 ec 41 89 c0 8b 83 00 03 00 00 83 e0 20 85 c0 74 d7 48 8b 45 f0 65 48 2b 04 25 28 00 00 00 75 24 48 8b 5d f8 44 89 c0 c9 c3 <0f> 0b e9 79 ff ff ff f0 80 4b 39 40 8b 83 00 03 00 00 44 8b 45 ec
Mar  7 22:36:41 promo8 kernel: [  680.510161] RSP: 0018:ffffba5602943d08 EFLAGS: 00010046
Mar  7 22:36:41 promo8 kernel: [  680.511183] RAX: 0000000000000000 RBX: ffff94e69a31a640 RCX: ffff94e698891000
Mar  7 22:36:41 promo8 kernel: [  680.512133] RDX: ffff94e69b0b7000 RSI: 0000000000000000 RDI: ffff94e69a31a640
Mar  7 22:36:41 promo8 kernel: [  680.513142] RBP: ffffba5602943d20 R08: 0000000000000000 R09: 0000000000000000
Mar  7 22:36:41 promo8 kernel: [  680.514204] R10: 0000000000000000 R11: 0000000000000000 R12: ffff94e69a31a640
Mar  7 22:36:41 promo8 kernel: [  680.515198] R13: 0000000000000000 R14: ffffba56028d43e0 R15: ffff94e69a31a678
Mar  7 22:36:41 promo8 kernel: [  680.516270] FS:  00007f71a8df7700(0000) GS:ffff950ddec00000(0000) knlGS:0000000000000000
Mar  7 22:36:41 promo8 kernel: [  680.517304] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar  7 22:36:41 promo8 kernel: [  680.518211] CR2: 0000000000000000 CR3: 000000289be02005 CR4: 00000000003726e0
Mar  7 22:36:41 promo8 kernel: [  680.519286] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Mar  7 22:36:41 promo8 kernel: [  680.520425] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Mar  7 22:36:41 promo8 kernel: [  680.521478] Call Trace:
Mar  7 22:36:41 promo8 kernel: [  680.522482]  <TASK>
Mar  7 22:36:41 promo8 kernel: [  680.523476]  kvm_arch_vcpu_ioctl_run+0x482/0x1750 [kvm]
Mar  7 22:36:41 promo8 kernel: [  680.524599]  kvm_vcpu_ioctl+0x247/0x5f0 [kvm]
Mar  7 22:36:41 promo8 kernel: [  680.525740]  ? kvm_arch_vcpu_ioctl_run+0x600/0x1750 [kvm]
Mar  7 22:36:41 promo8 kernel: [  680.526835]  ? __fget_files+0x86/0xc0
Mar  7 22:36:41 promo8 kernel: [  680.527961]  __x64_sys_ioctl+0x91/0xc0
Mar  7 22:36:41 promo8 kernel: [  680.528943]  do_syscall_64+0x61/0xb0
Mar  7 22:36:41 promo8 kernel: [  680.530039]  ? exit_to_user_mode_prepare+0x37/0x1b0
Mar  7 22:36:41 promo8 kernel: [  680.530936]  ? syscall_exit_to_user_mode+0x27/0x50
Mar  7 22:36:41 promo8 kernel: [  680.531849]  ? do_syscall_64+0x6e/0xb0
Mar  7 22:36:41 promo8 kernel: [  680.532887]  ? exc_page_fault+0x8f/0x170
Mar  7 22:36:41 promo8 kernel: [  680.533825]  ? asm_exc_page_fault+0x8/0x30
Mar  7 22:36:41 promo8 kernel: [  680.534682]  entry_SYSCALL_64_after_hwframe+0x44/0xae
Mar  7 22:36:41 promo8 kernel: [  680.535619] RIP: 0033:0x7f71b4c56cc7
Mar  7 22:36:41 promo8 kernel: [  680.536581] Code: 00 00 00 48 8b 05 c9 91 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 99 91 0c 00 f7 d8 64 89 01 48
Mar  7 22:36:41 promo8 kernel: [  680.538657] RSP: 002b:00007f71a8df2248 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Mar  7 22:36:41 promo8 kernel: [  680.539703] RAX: ffffffffffffffda RBX: 000000000000ae80 RCX: 00007f71b4c56cc7
Mar  7 22:36:41 promo8 kernel: [  680.540750] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 000000000000001a
Mar  7 22:36:41 promo8 kernel: [  680.541765] RBP: 000056370b64f8a0 R08: 00005637093e3d38 R09: 00000000000000ff
Mar  7 22:36:41 promo8 kernel: [  680.543542] R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000000
Mar  7 22:36:41 promo8 kernel: [  680.544516] R13: 000056370983f1c0 R14: 0000000000000001 R15: 0000000000000000
Mar  7 22:36:41 promo8 kernel: [  680.545442]  </TASK>
Mar  7 22:36:41 promo8 kernel: [  680.546319] ---[ end trace 4330ee028429339c ]---
Mar  7 22:36:41 promo8 kernel: [  680.547340] ------------[ cut here ]------------

After booting again with 5.13.19-4-pve, everything works fine as before.
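For anyone else hitting this, a hedged sketch of making the previous kernel the default on a GRUB-based host (the menu entry title below is an assumption, so list yours first; systemd-boot hosts would manage entries via proxmox-boot-tool instead):

```shell
# List the available boot menu entries to find the exact title
grep "menuentry '" /boot/grub/grub.cfg

# Then, in /etc/default/grub, point GRUB_DEFAULT at the older entry, e.g.:
#   GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.13.19-4-pve"

# Regenerate the GRUB configuration and reboot
update-grub
```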
 
I can confirm the bug. In my case all CTs and Debian VMs start and work fine, but Windows VMs seem to be stuck in a loop. I don't even get a console for those VMs; however, they appear as started in qm list, and syslog keeps filling endlessly with errors like the one below:

Code:
Mar 08 01:09:45 ns3192824 kernel: ------------[ cut here ]------------
Mar 08 01:09:45 ns3192824 kernel: WARNING: CPU: 8 PID: 2364 at arch/x86/kvm/vmx/vmx.c:6336 vmx_sync_pir_to_irr+0xad/0xd0 [kvm_intel]
Mar 08 01:09:45 ns3192824 kernel: Modules linked in: nft_compat nft_counter nf_tables rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace fscache netfs veth ebtable_filter ebtables ip6table_raw ip6t_REJECT nf_reject_ipv6 ip6table_filter ip6_tables iptable_raw xt_mac ipt_REJECT nf_reject_ipv4 xt_mark xt_set xt_physdev xt_addrtype xt_comment xt_tcpudp xt_multiport xt_conntrack ip_set_hash_net ip_set iptable_filter iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 bpfilter softdog nfnetlink_log nfnetlink intel_rapl_msr intel_rapl_common isst_if_common ipmi_ssif skx_edac nfit x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass drm_vram_helper drm_ttm_helper ttm crct10dif_pclmul drm_kms_helper cec ghash_clmulni_intel rc_core i2c_algo_bit aesni_intel fb_sys_fops ioatdma syscopyarea crypto_simd sysfillrect cryptd sysimgblt joydev input_leds efi_pstore dca rapl mei_me intel_cstate mei intel_pch_thermal acpi_ipmi acpi_pad ipmi_si mac_hid ipmi_devintf ipmi_msghandler
Mar 08 01:09:45 ns3192824 kernel:  acpi_power_meter zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) vhost_net vhost vhost_iotlb tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi drm sunrpc ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor hid_generic usbmouse usbkbd usbhid hid raid6_pq libcrc32c raid1 raid0 multipath linear xhci_pci xhci_pci_renesas crc32_pclmul i40e i2c_i801 i2c_smbus ahci xhci_hcd libahci wmi
Mar 08 01:09:45 ns3192824 kernel: CPU: 8 PID: 2364 Comm: kvm Tainted: P        W  O      5.13.19-5-pve #1
Mar 08 01:09:45 ns3192824 kernel: Hardware name: Supermicro Super Server/X11SDV-8C-TLN2F, BIOS 1.3a 07/13/2020
Mar 08 01:09:45 ns3192824 kernel: RIP: 0010:vmx_sync_pir_to_irr+0xad/0xd0 [kvm_intel]
Mar 08 01:09:45 ns3192824 kernel: Code: 45 ec 41 89 c0 8b 83 00 03 00 00 83 e0 20 85 c0 74 d7 48 8b 45 f0 65 48 2b 04 25 28 00 00 00 75 24 48 8b 5d f8 44 89 c0 c9 c3 <0f> 0b e9 79 ff ff ff f0 80 4b 39 40 8b 83 00 03 00 00 44 8b 45 ec
Mar 08 01:09:45 ns3192824 kernel: RSP: 0018:ffff95ddc13e7cf8 EFLAGS: 00010046
Mar 08 01:09:45 ns3192824 kernel: RAX: 0000000000000000 RBX: ffff8a9a371d4c80 RCX: 00007f80ad767700
Mar 08 01:09:45 ns3192824 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff8a9a371d4c80
Mar 08 01:09:45 ns3192824 kernel: RBP: ffff95ddc13e7d10 R08: ffff8aa8ffe00000 R09: 0000000000000000
Mar 08 01:09:45 ns3192824 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff8a9a371d4c80
Mar 08 01:09:45 ns3192824 kernel: R13: 0000000000000000 R14: ffff95ddc1a863e0 R15: ffff8a9a371d4cb8
Mar 08 01:09:45 ns3192824 kernel: FS:  00007f80ad767700(0000) GS:ffff8aa8ffe00000(0000) knlGS:0000000000000000
Mar 08 01:09:45 ns3192824 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 08 01:09:45 ns3192824 kernel: CR2: 0000000000000000 CR3: 00000001733da006 CR4: 00000000007726e0
Mar 08 01:09:45 ns3192824 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Mar 08 01:09:45 ns3192824 kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Mar 08 01:09:45 ns3192824 kernel: PKRU: 55555554
Mar 08 01:09:45 ns3192824 kernel: Call Trace:
Mar 08 01:09:45 ns3192824 kernel:  <TASK>
Mar 08 01:09:45 ns3192824 kernel:  kvm_arch_vcpu_ioctl_run+0x482/0x1750 [kvm]
Mar 08 01:09:45 ns3192824 kernel:  ? kvm_arch_vcpu_put+0x11c/0x180 [kvm]
Mar 08 01:09:45 ns3192824 kernel:  ? vcpu_put+0x1b/0x30 [kvm]
Mar 08 01:09:45 ns3192824 kernel:  kvm_vcpu_ioctl+0x247/0x5f0 [kvm]
Mar 08 01:09:45 ns3192824 kernel:  ? kvm_arch_vcpu_ioctl_run+0x600/0x1750 [kvm]
Mar 08 01:09:45 ns3192824 kernel:  ? __fget_files+0x86/0xc0
Mar 08 01:09:45 ns3192824 kernel:  __x64_sys_ioctl+0x91/0xc0
Mar 08 01:09:45 ns3192824 kernel:  do_syscall_64+0x61/0xb0
Mar 08 01:09:45 ns3192824 kernel:  ? fire_user_return_notifiers+0x3e/0x50
Mar 08 01:09:45 ns3192824 kernel:  ? exit_to_user_mode_prepare+0x37/0x1b0
Mar 08 01:09:45 ns3192824 kernel:  ? syscall_exit_to_user_mode+0x27/0x50
Mar 08 01:09:45 ns3192824 kernel:  ? do_syscall_64+0x6e/0xb0
Mar 08 01:09:45 ns3192824 kernel:  ? syscall_exit_to_user_mode+0x27/0x50
Mar 08 01:09:45 ns3192824 kernel:  ? do_syscall_64+0x6e/0xb0
Mar 08 01:09:45 ns3192824 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Mar 08 01:09:45 ns3192824 kernel: RIP: 0033:0x7f80b880bcc7
Mar 08 01:09:45 ns3192824 kernel: Code: 00 00 00 48 8b 05 c9 91 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 99 91 0c 00 f7 d8 64 89 01 48
Mar 08 01:09:45 ns3192824 kernel: RSP: 002b:00007f80ad7623c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Mar 08 01:09:45 ns3192824 kernel: RAX: ffffffffffffffda RBX: 000000000000ae80 RCX: 00007f80b880bcc7
Mar 08 01:09:45 ns3192824 kernel: RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 000000000000001b
Mar 08 01:09:45 ns3192824 kernel: RBP: 000056194e103510 R08: 000056194b88cd38 R09: 00000000ffffffff
Mar 08 01:09:45 ns3192824 kernel: R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000000
Mar 08 01:09:45 ns3192824 kernel: R13: 000056194bce81c0 R14: 0000000000000004 R15: 0000000000000000
Mar 08 01:09:45 ns3192824 kernel:  </TASK>
Mar 08 01:09:45 ns3192824 kernel: ---[ end trace 6a6aae8abcb83693 ]---
Mar 08 01:09:45 ns3192824 kernel: ------------[ cut here ]------------
 
I can confirm the bug. In my case all CTs and Debian VMs start and work fine, but Windows VMs seem to be stuck in a loop. I don't even get a console for those VMs; however, they appear as started in qm list.

I experienced this exact same issue. My Debian VMs and Alpine CT were fine; my Windows VMs showed as started and produced what appeared to be the same output as yours. It crashed my host because I ran out of disk space, on account of the logs filling up rapidly. I booted the previous kernel as suggested by Neobin and all is well.
 
Sorry for the late reply. In case you need it, here is my config:

Code:
# qm config 105
agent: 0
bootdisk: ide0
cores: 4
cpu: host,flags=+hv-tlbflush
ide0: local:105/vm-105-disk-0.raw,size=200G
ide3: none,media=cdrom
localtime: 1
memory: 16384
name: winsrv105
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr1
net1: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr2
numa: 0
ostype: win8
protection: 1
scsihw: virtio-scsi-pci
smbios1: uuid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
sockets: 2
vmgenid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Code:
# lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   46 bits physical, 48 bits virtual
CPU(s):                          16
On-line CPU(s) list:             0-15
Thread(s) per core:              2
Core(s) per socket:              8
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           85
Model name:                      Intel(R) Xeon(R) D-2141I CPU @ 2.20GHz
Stepping:                        4
CPU MHz:                         3000.000
CPU max MHz:                     3000,0000
CPU min MHz:                     1000,0000
BogoMIPS:                        4400.00
Virtualization:                  VT-x
L1d cache:                       256 KiB
L1i cache:                       256 KiB
L2 cache:                        8 MiB
L3 cache:                        11 MiB
NUMA node0 CPU(s):               0-15
Vulnerability Itlb multihit:     KVM: Mitigation: Split huge pages
Vulnerability L1tf:              Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds:               Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown:          Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Mitigation; Clear CPU buffers; SMT vulnerable
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_per
                                 fmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2
                                 apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp
                                 tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel
                                 _pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d
 
FYI, I already tracked down the offending patch (ref); it seems the backport missed a preparatory patch (ref). I'm currently testing for the minimally invasive change (a revert of the "incomplete" patch, or a backport of the supporting one).
 
I can confirm this behaviour as well. On PVE I accumulated around 60 GB of log files (kernel log and messages) within a couple of minutes, filled with strange messages like:
Code:
Mar 07 19:03:33 proxmox2 kernel:  ib_umad nfsd vfio_pci vfio_virqfd auth_rpcgss irqbypass nfs_acl lockd vfio_iommu_type1 grace vfio drm sunrpc ip_tables x_tables autofs4 btrfs blake2b_generic xor zstd_compress raid6_p>
Mar 07 19:03:33 proxmox2 kernel: CPU: 9 PID: 14229 Comm: kvm Tainted: P        W  O      5.13.19-5-pve #1
Mar 07 19:03:33 proxmox2 kernel: Hardware name: Supermicro Super Server/X10DRC-LN4+, BIOS 3.2 11/19/2019
Mar 07 19:03:33 proxmox2 kernel: RIP: 0010:vmx_sync_pir_to_irr+0xad/0xd0 [kvm_intel]
Mar 07 19:03:33 proxmox2 kernel: Code: 45 ec 41 89 c0 8b 83 00 03 00 00 83 e0 20 85 c0 74 d7 48 8b 45 f0 65 48 2b 04 25 28 00 00 00 75 24 48 8b 5d f8 44 89 c0 c9 c3 <0f> 0b e9 79 ff ff ff f0 80 4b 39 40 8b 83 00 03 00>
Mar 07 19:03:33 proxmox2 kernel: RSP: 0018:ffffa3ab2e0cfd98 EFLAGS: 00010046
Mar 07 19:03:33 proxmox2 kernel: RAX: 0000000000000000 RBX: ffff89bf8e2e8000 RCX: 00007f88e8eb7700
Mar 07 19:03:33 proxmox2 kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff89bf8e2e8000
Mar 07 19:03:33 proxmox2 kernel: RBP: ffffa3ab2e0cfdb0 R08: ffff89bdffc80000 R09: 0000000000000000
Mar 07 19:03:33 proxmox2 kernel: R10: 0000000000000001 R11: 0000000000000001 R12: ffff89bf8e2e8000
Mar 07 19:03:33 proxmox2 kernel: R13: 0000000000000000 R14: ffffa3ab2e0923e0 R15: ffff89bf8e2e8038
Mar 07 19:03:33 proxmox2 systemd-journald[628]: Missed 17 kernel messages
Mar 07 19:03:33 proxmox2 kernel: RSP: 002b:00007f88e8eb23c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Mar 07 19:03:33 proxmox2 systemd-journald[628]: Missed 10 kernel messages
Mar 07 19:03:33 proxmox2 kernel:  ib_umad nfsd vfio_pci vfio_virqfd auth_rpcgss irqbypass nfs_acl lockd vfio_iommu_type1 grace vfio drm sunrpc ip_tables x_tables autofs4 btrfs blake2b_generic xor zstd_compress raid6_p>
Mar 07 19:03:33 proxmox2 kernel: CPU: 9 PID: 14229 Comm: kvm Tainted: P        W  O      5.13.19-5-pve #1
Mar 07 19:03:33 proxmox2 kernel: Hardware name: Supermicro Super Server/X10DRC-LN4+, BIOS 3.2 11/19/2019
Mar 07 19:03:33 proxmox2 systemd-journald[628]: Missed 9 kernel messages
Mar 07 19:03:33 proxmox2 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 07 19:03:33 proxmox2 systemd-journald[628]: Missed 25 kernel messages
Mar 07 19:03:33 proxmox2 kernel: Modules linked in: nf_tables veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables sctp ip6_udp_tunnel udp_tunnel iptable_filter bpfilter 8021q garp m>
Mar 07 19:03:33 proxmox2 systemd-journald[628]: Missed 1 kernel messages
Mar 07 19:03:33 proxmox2 kernel: CPU: 9 PID: 14229 Comm: kvm Tainted: P        W  O      5.13.19-5-pve #1
Mar 07 19:03:33 proxmox2 kernel: Hardware name: Supermicro Super Server/X10DRC-LN4+, BIOS 3.2 11/19/2019
Mar 07 19:03:33 proxmox2 kernel: RIP: 0010:vmx_sync_pir_to_irr+0xad/0xd0 [kvm_intel]
Mar 07 19:03:33 proxmox2 kernel: Code: 45 ec 41 89 c0 8b 83 00 03 00 00 83 e0 20 85 c0 74 d7 48 8b 45 f0 65 48 2b 04 25 28 00 00 00 75 24 48 8b 5d f8 44 89 c0 c9 c3 <0f> 0b e9 79 ff ff ff f0 80 4b 39 40 8b 83 00 03 00>
Mar 07 19:03:33 proxmox2 kernel: RSP: 0018:ffffa3ab2e0cfda8 EFLAGS: 00010046
Mar 07 19:03:33 proxmox2 systemd-journald[628]: Missed 22 kernel messages
....
 
An updated kernel package, pve-kernel-5.13.19-5-pve in version 5.13.19-13, is now available on the pvetest and pve-no-subscription repositories. It supersedes the previous one and only adds the backport of the supporting patch mentioned in my previous reply.

I could no longer trigger my reproducer with that version, so feedback is welcome.
 
