kernel 5.15.30-2 breaks HPE Smart Array P222

winproof

Hello,

After updating to kernel 5.15.30-2, none of my VMs (Debian 10) on one node boot (the disks apparently go read-only), and the VM disks "disappear" from local storage in the web UI (although they are in fact still present on disk).
If I restore one of those VMs on another node, it boots normally.

Rebooting the problematic node on the previous kernel (5.13.19-6) solves the problem.
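As a stopgap until the regression is fixed, the node can be kept on the working kernel across reboots. A sketch, assuming a Proxmox VE install recent enough that proxmox-boot-tool supports kernel pinning (older installs would instead set GRUB_DEFAULT by hand):

```shell
# List the kernel versions proxmox-boot-tool knows about
proxmox-boot-tool kernel list

# Pin the known-good kernel so the node keeps booting it
proxmox-boot-tool kernel pin 5.13.19-6-pve

# Later, once a fixed 5.15 kernel is available, remove the pin
proxmox-boot-tool kernel unpin
```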

The node is an HP MicroServer Gen8 with an HPE Smart Array P222.

Boot log and pve report attached.

Does anyone know what's going on? :)

Thanks
 

Attachments

  • logs.txt (260.6 KB)
  • pve1-gen8-pve-report-Tue-10-May-2022-2-15(1).txt (57.6 KB)
The repeating error:

Code:
---[ end trace 263c37e0ffd431ba ]---
May 09 19:40:23 pve1-gen8 kernel: DMAR: ERROR: DMA PTE for vPFN 0xb5f91 already set (to b5f91003 not 1ca869803)
------------[ cut here ]------------
May 09 19:40:23 pve1-gen8 kernel: WARNING: CPU: 3 PID: 1782 at drivers/iommu/intel/iommu.c:2391 __domain_mapping.cold+0x175/0x1a3
May 09 19:40:23 pve1-gen8 kernel: Modules linked in: rpcsec_gss_krb5 nfsv4 nfs fscache netfs veth ebtable_filter ebtables ip6table_raw ip6t_REJECT nf_reject_ipv6 ip6table_filter ip6_tables iptable_raw ipt_REJECT nf_reject_ipv4 xt_mark xt_set xt_physdev xt_addrtype xt_comment xt_tcpudp xt_multiport xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set_hash_net ip_set sctp ip6_udp_tunnel udp_tunnel iptable_filter bpfilter softdog nfnetlink_log nfnetlink ipmi_ssif intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm mgag200 irqbypass drm_kms_helper crct10dif_pclmul ghash_clmulni_intel aesni_intel cec rc_core crypto_simd i2c_algo_bit cryptd fb_sys_fops syscopyarea sysfillrect sysimgblt hpilo rapl intel_cstate ie31200_edac acpi_ipmi ipmi_si ipmi_devintf mac_hid serio_raw ipmi_msghandler acpi_power_meter pcspkr zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) vhost_net vhost vhost_iotlb tap ib_iser rdma_cm iw_cm ib_cm ib_core
May 09 19:40:23 pve1-gen8 kernel:  iscsi_tcp nfsd libiscsi_tcp libiscsi auth_rpcgss scsi_transport_iscsi nfs_acl lockd grace drm sunrpc ip_tables x_tables autofs4 btrfs blake2b_generic xor zstd_compress raid6_pq libcrc32c simplefb hid_generic usbhid hid st gpio_ich crc32_pclmul xhci_pci xhci_pci_renesas xhci_hcd psmouse lpc_ich ahci libahci uhci_hcd hpsa ehci_pci tg3 scsi_transport_sas ehci_hcd
May 09 19:40:23 pve1-gen8 kernel: CPU: 3 PID: 1782 Comm: kvm Tainted: P          IO      5.15.30-2-pve #1
May 09 19:40:23 pve1-gen8 kernel: Hardware name: HP ProLiant MicroServer Gen8, BIOS J06 11/02/2015
May 09 19:40:23 pve1-gen8 kernel: RIP: 0010:__domain_mapping.cold+0x175/0x1a3
May 09 19:40:23 pve1-gen8 kernel: Code: 4c 89 ee 48 c7 c7 18 57 85 b5 4c 89 5d b8 e8 a2 c1 fa ff 8b 05 3f ce 22 01 4c 8b 5d b8 85 c0 74 09 83 e8 01 89 05 2e ce 22 01 <0f> 0b e9 68 41 b3 ff 48 63 d1 be 01 00 00 00 48 c7 c7 60 e8 10 b6
May 09 19:40:23 pve1-gen8 kernel: RSP: 0018:ffffb01f0862f2d0 EFLAGS: 00010202
May 09 19:40:23 pve1-gen8 kernel: RAX: 0000000000000004 RBX: ffff9bd7437d8c80 RCX: 0000000000000000
May 09 19:40:23 pve1-gen8 kernel: RDX: 0000000000000000 RSI: ffff9bda76ee0980 RDI: ffff9bda76ee0980
May 09 19:40:23 pve1-gen8 kernel: RBP: ffffb01f0862f350 R08: 0000000000000003 R09: 0000000000000001
May 09 19:40:23 pve1-gen8 kernel: R10: ffff9bd7570a32e0 R11: 00000000001ca868 R12: ffff9bd74234d800
May 09 19:40:23 pve1-gen8 kernel: R13: 00000000000b5f90 R14: 00000001ca868803 R15: 0000000000000008
May 09 19:40:23 pve1-gen8 kernel: FS:  00007f96502b8040(0000) GS:ffff9bda76ec0000(0000) knlGS:0000000000000000
May 09 19:40:23 pve1-gen8 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 09 19:40:23 pve1-gen8 kernel: CR2: 0000560a5b440078 CR3: 00000001760f2004 CR4: 00000000001726e0
May 09 19:40:23 pve1-gen8 kernel: Call Trace:
May 09 19:40:23 pve1-gen8 kernel:  <TASK>
May 09 19:40:23 pve1-gen8 kernel:  ? __domain_mapping+0x1db/0x4e0
May 09 19:40:23 pve1-gen8 kernel:  intel_iommu_map_pages+0xdc/0x120
May 09 19:40:23 pve1-gen8 kernel:  __iommu_map+0xda/0x270
May 09 19:40:23 pve1-gen8 kernel:  __iommu_map_sg+0x8e/0x120
May 09 19:40:23 pve1-gen8 kernel:  iommu_map_sg_atomic+0x14/0x20
May 09 19:40:23 pve1-gen8 kernel:  iommu_dma_map_sg+0x348/0x4e0
May 09 19:40:23 pve1-gen8 kernel:  __dma_map_sg_attrs+0x66/0x70
May 09 19:40:23 pve1-gen8 kernel:  dma_map_sg_attrs+0xe/0x20
May 09 19:40:23 pve1-gen8 kernel:  scsi_dma_map+0x39/0x50
May 09 19:40:23 pve1-gen8 kernel:  hpsa_ciss_submit+0xca/0x430 [hpsa]
May 09 19:40:23 pve1-gen8 kernel:  ? sd_init_command+0x2c5/0xe50
May 09 19:40:23 pve1-gen8 kernel:  hpsa_scsi_queue_command+0x1b8/0x240 [hpsa]
May 09 19:40:23 pve1-gen8 kernel:  scsi_queue_rq+0x3dd/0xbe0
May 09 19:40:23 pve1-gen8 kernel:  blk_mq_dispatch_rq_list+0x13c/0x800
May 09 19:40:23 pve1-gen8 kernel:  ? sbitmap_get+0xb4/0x1e0
May 09 19:40:23 pve1-gen8 kernel:  ? sbitmap_get+0x121/0x1e0
May 09 19:40:23 pve1-gen8 kernel:  __blk_mq_do_dispatch_sched+0xba/0x2d0
May 09 19:40:23 pve1-gen8 kernel:  __blk_mq_sched_dispatch_requests+0x104/0x150
May 09 19:40:23 pve1-gen8 kernel:  blk_mq_sched_dispatch_requests+0x35/0x60
May 09 19:40:23 pve1-gen8 kernel:  __blk_mq_run_hw_queue+0x34/0xb0
May 09 19:40:23 pve1-gen8 kernel:  __blk_mq_delay_run_hw_queue+0x162/0x170
May 09 19:40:23 pve1-gen8 kernel:  blk_mq_run_hw_queue+0x83/0x120
May 09 19:40:23 pve1-gen8 kernel:  blk_mq_sched_insert_requests+0x69/0xf0
May 09 19:40:23 pve1-gen8 kernel:  blk_mq_flush_plug_list+0x103/0x1c0
May 09 19:40:23 pve1-gen8 kernel:  blk_flush_plug_list+0xdd/0x100
May 09 19:40:23 pve1-gen8 kernel:  blk_finish_plug+0x29/0x40
May 09 19:40:23 pve1-gen8 kernel:  __iomap_dio_rw+0x559/0x830
May 09 19:40:23 pve1-gen8 kernel:  iomap_dio_rw+0xe/0x30
May 09 19:40:23 pve1-gen8 kernel:  ext4_file_read_iter+0x10c/0x180
May 09 19:40:23 pve1-gen8 kernel:  io_read+0xec/0x4c0
May 09 19:40:23 pve1-gen8 kernel:  ? __pollwait+0xd0/0xd0
May 09 19:40:23 pve1-gen8 kernel:  io_issue_sqe+0xeb3/0x1dc0
May 09 19:40:23 pve1-gen8 kernel:  ? __pollwait+0xd0/0xd0
May 09 19:40:23 pve1-gen8 kernel:  ? __pollwait+0xd0/0xd0
May 09 19:40:23 pve1-gen8 kernel:  __io_queue_sqe+0x35/0x310
May 09 19:40:23 pve1-gen8 kernel:  ? fget+0x2a/0x30
May 09 19:40:23 pve1-gen8 kernel:  io_submit_sqes+0xf93/0x1b30
May 09 19:40:23 pve1-gen8 kernel:  ? __fget_files+0x86/0xc0
May 09 19:40:23 pve1-gen8 kernel:  __do_sys_io_uring_enter+0x515/0x990
May 09 19:40:23 pve1-gen8 kernel:  ? __do_sys_io_uring_enter+0x515/0x990
May 09 19:40:23 pve1-gen8 kernel:  ? exit_to_user_mode_prepare+0x37/0x1b0
May 09 19:40:23 pve1-gen8 kernel:  ? syscall_exit_to_user_mode+0x27/0x50
May 09 19:40:23 pve1-gen8 kernel:  __x64_sys_io_uring_enter+0x29/0x30
May 09 19:40:23 pve1-gen8 kernel:  do_syscall_64+0x5c/0xc0
May 09 19:40:23 pve1-gen8 kernel:  ? do_syscall_64+0x69/0xc0
May 09 19:40:23 pve1-gen8 kernel:  ? __x64_sys_read+0x1a/0x20
May 09 19:40:23 pve1-gen8 kernel:  ? exit_to_user_mode_prepare+0x37/0x1b0
May 09 19:40:23 pve1-gen8 kernel:  ? syscall_exit_to_user_mode+0x27/0x50
May 09 19:40:23 pve1-gen8 kernel:  ? __x64_sys_io_uring_enter+0x29/0x30
May 09 19:40:23 pve1-gen8 kernel:  ? do_syscall_64+0x69/0xc0
May 09 19:40:23 pve1-gen8 kernel:  ? common_interrupt+0x55/0xa0
May 09 19:40:23 pve1-gen8 kernel:  ? asm_common_interrupt+0x8/0x40
May 09 19:40:23 pve1-gen8 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
May 09 19:40:23 pve1-gen8 kernel: RIP: 0033:0x7f965aed19b9
May 09 19:40:23 pve1-gen8 kernel: Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d a7 54 0c 00 f7 d8 64 89 01 48
May 09 19:40:23 pve1-gen8 kernel: RSP: 002b:00007ffe7131bf38 EFLAGS: 00000216 ORIG_RAX: 00000000000001aa
May 09 19:40:23 pve1-gen8 kernel: RAX: ffffffffffffffda RBX: 00007f943e5fc800 RCX: 00007f965aed19b9
May 09 19:40:23 pve1-gen8 kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000011
May 09 19:40:23 pve1-gen8 kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000008
May 09 19:40:23 pve1-gen8 kernel: R10: 0000000000000000 R11: 0000000000000216 R12: 0000563a5143bba8
May 09 19:40:23 pve1-gen8 kernel: R13: 0000563a5143bc60 R14: 0000563a5143bba0 R15: 0000000000000001
May 09 19:40:23 pve1-gen8 kernel:  </TASK>
May 09 19:40:23 pve1-gen8 kernel: ---[ end trace 263c37e0ffd431ba ]---
 
Hm - on a hunch, could you try and see whether adding either:
* intel_iommu=off
or
* intremap=off

to the kernel command line fixes the issue when booting into the 5.15 kernel series?
(Also check whether pve-kernel-5.15.35-1-pve (a newer version) fixes the issue.)

I hope this helps!
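For anyone unsure where these parameters go: on a GRUB-based Proxmox VE install they belong on the kernel command line in /etc/default/grub (systemd-boot/ZFS installs use /etc/kernel/cmdline instead). A minimal sketch, demonstrated on a sample file with hypothetical contents:

```shell
# Demonstrate the edit on a sample copy first (hypothetical contents)
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > /tmp/grub.sample

# Append intel_iommu=off inside the quoted GRUB_CMDLINE_LINUX_DEFAULT value
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 intel_iommu=off"/' /tmp/grub.sample

cat /tmp/grub.sample   # GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off"

# Apply the same edit to the real /etc/default/grub, then:
#   update-grub
#   reboot
```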
 
We have one HP DL380 Gen8 here (with a Smart Array P420), and I could not reproduce these issues (with a Debian VM running stress-ng).
 
Sounds good - one further suggestion: if possible, make sure you've installed the latest firmware for all components in the system.
 
Jesus Christ, this took almost two full days to discover.

I can also confirm the issue and the solution (intel_iommu=off).

Tested on an HPE ProLiant ML310e Gen8 v2 with the Smart Array P420 (PCIe version).
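After rebooting with the workaround in place, it's worth confirming the parameter actually reached the running kernel; a quick diagnostic sketch (nothing here is Proxmox-specific):

```shell
# Confirm the running kernel picked up the parameter
grep -o 'intel_iommu=off' /proc/cmdline \
    && echo "intel_iommu=off active" \
    || echo "parameter missing"

# With the IOMMU disabled, the DMAR "DMA PTE ... already set" warnings
# from the trace above should no longer appear in the kernel log
dmesg | grep -i DMAR | tail
```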
 
