Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

Hi,

after updating the kernel to 6.11, I have a problem with accessing the iDRAC on Dell 12th- and 13th-generation servers (iDRAC 7 & 8). When the system starts, the virtual console stops responding and the entire iDRAC interface stops working. Rebooting back to kernel 6.8 solves the problem. I checked this on 10 different servers, and the problem is the same everywhere.

iDRAC logs:
RAC0182 The iDRAC firmware was rebooted with the following reason: watchdog.
RAC0708 Previous reboot was due to a firmware watchdog timeout.

Is there anything I can check or change to solve the problem and use the 6.11 kernel?

Regards,
Bartek

I am also experiencing these exact same issues with the iDRAC 8 on Dell R730xd and R630 servers using the 6.11 kernel.
The iDRAC works perfectly on the same machines with the 6.8 kernel; reverting to 6.8 also resolves the problem.
The same messages are logged in the iDRAC logs as for the original poster.

My suspicion is that this is related to the video passthrough to the iDRAC virtual console function. Occasionally, I have been able to get the main iDRAC panel to display after multiple browser refreshes. The virtual console is always blank.

BIOS version is 2.19.0
iDRAC version is 2.86.86.86

These are the current versions of the firmware for these machines.
 
Same here! Dell R420, latest firmware and BIOS everywhere; this has never happened to me before.
 
Hello,
I've been getting strange messages that I hadn't seen until now.
Has anyone had a similar problem?

Bash:
[Dec30 03:16] smartctl: page allocation failure: order:0, mode:0xcc4(GFP_KERNEL|GFP_DMA32), nodemask=(null),cpuset=sd5.service,mems_allowed=0-1
[  +0.000406] CPU: 6 UID: 0 PID: 604375 Comm: smartctl Tainted: P           O       6.11.0-2-pve #1
[  +0.000212] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE
[  +0.000209] Hardware name: Supermicro X9DR3-F/X9DR3-F, BIOS 3.4 06/30/2020
[  +0.000222] Call Trace:
[  +0.000214]  <TASK>
[  +0.000216]  dump_stack_lvl+0x76/0xa0
[  +0.000222]  dump_stack+0x10/0x20
[  +0.000217]  warn_alloc+0x173/0x1f0
[  +0.000213]  __alloc_pages_noprof+0x1175/0x12d0
[  +0.000209]  __dma_direct_alloc_pages.constprop.0+0xa3/0x250
[  +0.000212]  dma_direct_alloc+0xa3/0x280
[  +0.000210]  dma_alloc_attrs+0x76/0xc0
[  +0.000225]  megasas_mgmt_fw_ioctl+0x2b2/0x9c0 [megaraid_sas]
[  +0.000226]  megasas_mgmt_ioctl_fw.constprop.0+0x249/0x2c0 [megaraid_sas]
[  +0.000220]  megasas_mgmt_ioctl+0x28/0x50 [megaraid_sas]
[  +0.000222]  __x64_sys_ioctl+0xa0/0xf0
[  +0.000219]  x64_sys_call+0xb31/0x24e0
[  +0.000222]  do_syscall_64+0x7e/0x170
[  +0.000223]  ? handle_softirqs+0xd8/0x2f0
[  +0.000236]  ? irqentry_exit_to_user_mode+0x43/0x250
[  +0.000236]  ? irqentry_exit+0x43/0x50
[  +0.000233]  ? common_interrupt+0x64/0xe0
[  +0.000238]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  +0.000246] RIP: 0033:0x732aad71ccdb
[  +0.000239] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1c 48 8b 44 24 18 64 48 2b 04 25 28 00 00
[  +0.000551] RSP: 002b:00007ffeaca841b0 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[  +0.000263] RAX: ffffffffffffffda RBX: 00005cb64240cfd0 RCX: 0000732aad71ccdb
[  +0.000275] RDX: 00007ffeaca84210 RSI: 00000000c1944d01 RDI: 0000000000000003
[  +0.000267] RBP: 0000732aada436b8 R08: 0000000000000010 R09: 00007ffeaca845f0
[  +0.000270] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffeaca84210
[  +0.000273] R13: 00005cb6423d8a80 R14: 00007ffeaca845a0 R15: 00005cb64240cfd0
[  +0.000286]  </TASK>
[  +0.000292] Mem-Info:
[  +0.000286] active_anon:17540213 inactive_anon:9494927 isolated_anon:0
               active_file:56932 inactive_file:36209743 isolated_file:0
               unevictable:96792 dirty:39916 writeback:7026
               slab_reclaimable:1173951 slab_unreclaimable:108029
               mapped:71232 shmem:14315 pagetables:67065
               sec_pagetables:13993 bounce:0
               kernel_misc_reclaimable:0
               free:496159 free_pcp:2458 free_cma:0
[  +0.002591] Node 0 active_anon:35936204kB inactive_anon:37979648kB active_file:768kB inactive_file:52110004kB unevictable:358544kB isolated(anon):0kB isolated(file):0kB mapped:175040kB dirty:2620kB writeback:2992kB shmem:43740kB shmem>
[  +0.001285] Node 0 DMA free:560kB boost:2048kB min:2052kB low:2064kB high:2076kB reserved_highatomic:0KB active_anon:4236kB inactive_anon:4112kB active_file:0kB inactive_file:2116kB unevictable:0kB writepending:1936kB present:15984kB >
[  +0.001003] lowmem_reserve[]: 0 1883 128805 0 0
[  +0.000410] Node 0 DMA32 free:1572kB boost:6768kB min:7424kB low:9352kB high:11280kB reserved_highatomic:0KB active_anon:290616kB inactive_anon:95948kB active_file:0kB inactive_file:1558796kB unevictable:4kB writepending:2756kB presen>
[  +0.001459] lowmem_reserve[]: 0 0 126922 0 0
[  +0.000438] Node 0 DMA: 0*4kB 0*8kB 3*16kB (U) 16*32kB (U) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 560kB
[  +0.000643] Node 0 DMA32: 8*4kB (ME) 9*8kB (ME) 8*16kB (E) 41*32kB (E) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1544kB
[  +0.000564] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  +0.000589] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  +0.000619] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  +0.000615] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  +0.000535] 36286427 total pagecache pages
[  +0.000660] 0 pages in swap cache
[  +0.000628] Free swap  = 0kB
[  +0.000503] Total swap = 0kB
[  +0.000570] 67100293 pages RAM
[  +0.000612] 0 pages HighMem/MovableOnly
[  +0.000456] 1078751 pages reserved
[  +0.000488] 0 pages hwpoisoned
[  +0.000627] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.017064] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.011084] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.012297] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[Dec30 03:18] warn_alloc: 3 callbacks suppressed
[  +0.000009] storcli_x64: page allocation failure: order:0, mode:0xcc4(GFP_KERNEL|GFP_DMA32), nodemask=(null),cpuset=sd5.service,mems_allowed=0-1
[  +0.001123] CPU: 15 UID: 0 PID: 605622 Comm: storcli_x64 Tainted: P           O       6.11.0-2-pve #1
[  +0.000487] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE
[  +0.000481] Hardware name: Supermicro X9DR3-F/X9DR3-F, BIOS 3.4 06/30/2020
[  +0.000484] Call Trace:
[  +0.000481]  <TASK>
[  +0.000484]  dump_stack_lvl+0x76/0xa0
[  +0.000524]  dump_stack+0x10/0x20
[  +0.000481]  warn_alloc+0x173/0x1f0
[  +0.000478]  __alloc_pages_noprof+0x1175/0x12d0
[  +0.000476]  __dma_direct_alloc_pages.constprop.0+0xa3/0x250
[  +0.000477]  dma_direct_alloc+0xa3/0x280
[  +0.000473]  dma_alloc_attrs+0x76/0xc0
[  +0.000472]  megasas_mgmt_fw_ioctl+0x2b2/0x9c0 [megaraid_sas]
[  +0.000488]  megasas_mgmt_ioctl_fw.constprop.0+0x249/0x2c0 [megaraid_sas]
[  +0.000483]  megasas_mgmt_ioctl+0x28/0x50 [megaraid_sas]
[  +0.000482]  __x64_sys_ioctl+0xa0/0xf0
[  +0.000481]  x64_sys_call+0xb31/0x24e0
[  +0.000489]  do_syscall_64+0x7e/0x170
[  +0.000479]  ? do_user_addr_fault+0x5ec/0x830
[  +0.000491]  ? irqentry_exit_to_user_mode+0x43/0x250
[  +0.000478]  ? irqentry_exit+0x43/0x50
[  +0.000449]  ? exc_page_fault+0x96/0x1e0
[  +0.000445]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  +0.000448] RIP: 0033:0x9cae87
[  +0.000442] Code: 44 00 00 31 ff e8 39 30 02 00 4c 8b 25 d2 d4 84 00 85 c0 79 94 eb af 66 2e 0f 1f 84 00 00 00 00 00 66 90 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 0f 83 bd a1 00 00 c3 66 2e 0f 1f 84 00 00 00 00
[  +0.000938] RSP: 002b:00007ffd25db5568 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[  +0.000474] RAX: ffffffffffffffda RBX: 00007ffd25db5830 RCX: 00000000009cae87
[  +0.000472] RDX: 0000000004faf060 RSI: 00000000c1944d01 RDI: 0000000000000003
[  +0.000477] RBP: 00007ffd25db55a0 R08: 0000000004fcb160 R09: 0000000004fb1270
[  +0.000473] R10: 0000000000beb2d0 R11: 0000000000000246 R12: 0000000000000002
[  +0.000477] R13: 000000000076cdcc R14: 0000000000000001 R15: 0000000000000001
[  +0.000471]  </TASK>
[  +0.000481] Mem-Info:
[  +0.000467] active_anon:18576751 inactive_anon:8457819 isolated_anon:0
               active_file:55193 inactive_file:36193861 isolated_file:0
               unevictable:96792 dirty:1998 writeback:15828
               slab_reclaimable:1202765 slab_unreclaimable:107665
               mapped:72406 shmem:14315 pagetables:67070
               sec_pagetables:14425 bounce:0
               kernel_misc_reclaimable:0
               free:496619 free_pcp:732 free_cma:0
[  +0.003619] Node 0 active_anon:40222264kB inactive_anon:33831212kB active_file:924kB inactive_file:51982664kB unevictable:358544kB isolated(anon):0kB isolated(file):0kB mapped:179648kB dirty:424kB writeback:19876kB shmem:34376kB shmem>
[  +0.001278] Node 0 DMA free:976kB boost:2048kB min:2052kB low:2064kB high:2076kB reserved_highatomic:0KB active_anon:6392kB inactive_anon:2120kB active_file:132kB inactive_file:1620kB unevictable:0kB writepending:1380kB present:15984k>
[  +0.000874] lowmem_reserve[]: 0 1883 128805 0 0
[  +0.000436] Node 0 DMA32 free:1988kB boost:6768kB min:7424kB low:9352kB high:11280kB reserved_highatomic:0KB active_anon:290224kB inactive_anon:83624kB active_file:0kB inactive_file:1577492kB unevictable:4kB writepending:11264kB prese>
[  +0.001347] lowmem_reserve[]: 0 0 126922 0 0
[  +0.000456] Node 0 DMA: 12*4kB (U) 12*8kB (U) 12*16kB (U) 12*32kB (U) 4*64kB (U) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 976kB
[  +0.000491] Node 0 DMA32: 1*4kB (E) 0*8kB 0*16kB 62*32kB (UE) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1988kB
[  +0.000491] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  +0.000491] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  +0.000480] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  +0.000486] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  +0.000478] 36268950 total pagecache pages
[  +0.000484] 0 pages in swap cache
[  +0.000478] Free swap  = 0kB
[  +0.000483] Total swap = 0kB
[  +0.000474] 67100293 pages RAM
[  +0.000473] 0 pages HighMem/MovableOnly
[  +0.000463] 1078751 pages reserved
[  +0.000453] 0 pages hwpoisoned
[  +0.000455] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.012169] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.007112] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.007578] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.007481] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.007381] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.008749] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.007046] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.007505] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.007524] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[Dec30 03:25] warn_alloc: 9 callbacks suppressed
[  +0.000007] smartctl: page allocation failure: order:0, mode:0xcc4(GFP_KERNEL|GFP_DMA32), nodemask=(null),cpuset=cron.service,mems_allowed=0-1
[  +0.001137] CPU: 15 UID: 0 PID: 609724 Comm: smartctl Tainted: P           O       6.11.0-2-pve #1
[  +0.000484] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE
[  +0.000476] Hardware name: Supermicro X9DR3-F/X9DR3-F, BIOS 3.4 06/30/2020
[  +0.000482] Call Trace:
[  +0.000480]  <TASK>
[  +0.000479]  dump_stack_lvl+0x76/0xa0
[  +0.000479]  dump_stack+0x10/0x20
[  +0.000483]  warn_alloc+0x173/0x1f0
[  +0.000539]  __alloc_pages_noprof+0x1175/0x12d0
[  +0.000473]  __dma_direct_alloc_pages.constprop.0+0xa3/0x250
[  +0.000482]  dma_direct_alloc+0xa3/0x280
[  +0.000470]  dma_alloc_attrs+0x76/0xc0
[  +0.000470]  megasas_mgmt_fw_ioctl+0x2b2/0x9c0 [megaraid_sas]
[  +0.000484]  megasas_mgmt_ioctl_fw.constprop.0+0x249/0x2c0 [megaraid_sas]
[  +0.000523]  megasas_mgmt_ioctl+0x28/0x50 [megaraid_sas]
[  +0.000480]  __x64_sys_ioctl+0xa0/0xf0
[  +0.000477]  x64_sys_call+0xb31/0x24e0
[  +0.000478]  do_syscall_64+0x7e/0x170
[  +0.000475]  ? __handle_mm_fault+0x83f/0x1120
[  +0.000477]  ? __count_memcg_events+0x7d/0x130
[  +0.000462]  ? count_memcg_events.constprop.0+0x2a/0x50
[  +0.000506]  ? handle_mm_fault+0xae/0x360
[  +0.000443]  ? do_user_addr_fault+0x5ec/0x830
[  +0.000441]  ? irqentry_exit_to_user_mode+0x43/0x250
[  +0.000441]  ? irqentry_exit+0x43/0x50
[  +0.000436]  ? exc_page_fault+0x96/0x1e0
[  +0.000434]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  +0.000442] RIP: 0033:0x784574f1ccdb
[  +0.000472] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1c 48 8b 44 24 18 64 48 2b 04 25 28 00 00
[  +0.000947] RSP: 002b:00007ffc53c3bca0 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[  +0.000467] RAX: ffffffffffffffda RBX: 00005da75704ffd0 RCX: 0000784574f1ccdb
[  +0.000466] RDX: 00007ffc53c3bd00 RSI: 00000000c1944d01 RDI: 0000000000000003
[  +0.000467] RBP: 00007845752bb6b8 R08: 0000000000000010 R09: 00007ffc53c3c0e0
[  +0.000467] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffc53c3bd00
[  +0.000483] R13: 00005da75701ba80 R14: 00007ffc53c3c090 R15: 00005da75704ffd0
[  +0.000473]  </TASK>
[  +0.000487] Mem-Info:
[  +0.000459] active_anon:18938645 inactive_anon:8097626 isolated_anon:0
               active_file:76700 inactive_file:36123230 isolated_file:0
               unevictable:96792 dirty:34394 writeback:13906
               slab_reclaimable:1251694 slab_unreclaimable:107594
               mapped:63652 shmem:14315 pagetables:67261
               sec_pagetables:14213 bounce:0
               kernel_misc_reclaimable:0
               free:490110 free_pcp:1027 free_cma:0
[  +0.003327] Node 0 active_anon:41667176kB inactive_anon:32390440kB active_file:264kB inactive_file:51905704kB unevictable:358544kB isolated(anon):0kB isolated(file):0kB mapped:142288kB dirty:12496kB writeback:12kB shmem:33972kB shmem_>
[  +0.001191] Node 0 DMA free:1196kB boost:2048kB min:2052kB low:2064kB high:2076kB reserved_highatomic:0KB active_anon:4296kB inactive_anon:4272kB active_file:0kB inactive_file:1464kB unevictable:0kB writepending:1356kB present:15984kB>
[  +0.000890] lowmem_reserve[]: 0 1883 128805 0 0
[  +0.000430] Node 0 DMA32 free:1716kB boost:6768kB min:7424kB low:9352kB high:11280kB reserved_highatomic:0KB active_anon:274660kB inactive_anon:91492kB active_file:0kB inactive_file:1572692kB unevictable:4kB writepending:6848kB presen>
[  +0.001356] lowmem_reserve[]: 0 0 126922 0 0
[  +0.000459] Node 0 DMA: 19*4kB (M) 26*8kB (UM) 17*16kB (UM) 20*32kB (UM) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1196kB
[  +0.000494] Node 0 DMA32: 50*4kB (ME) 50*8kB (ME) 49*16kB (ME) 11*32kB (ME) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1736kB
[  +0.000563] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  +0.000500] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  +0.000491] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  +0.000495] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  +0.000484] 36219964 total pagecache pages
[  +0.000483] 0 pages in swap cache
[  +0.000502] Free swap  = 0kB
[  +0.000467] Total swap = 0kB
[  +0.000455] 67100293 pages RAM
[  +0.000451] 0 pages HighMem/MovableOnly
[  +0.000456] 1078751 pages reserved
[  +0.000452] 0 pages hwpoisoned
[  +0.000462] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.062413] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.008672] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.007138] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.028919] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.023431] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.023524] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.024735] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL
[  +0.032503] megaraid_sas 0000:82:00.0: Failed to alloc kernel SGL buffer for IOCTL

The server hasn't stopped working, but I'm worried.
 
Because the message was too long, I could not include the pveversion output there.
Here is the output of pveversion -v:

Bash:
proxmox-ve: 8.3.0 (running kernel: 6.11.0-2-pve)
pve-manager: 8.3.2 (running version: 8.3.2/3e76eec21c4a14a7)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.11.0-2-pve-signed: 6.11.0-2
proxmox-kernel-6.8: 6.8.12-5
proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5
proxmox-kernel-6.8.8-3-pve-signed: 6.8.8-3
proxmox-kernel-6.5.13-6-pve-signed: 6.5.13-6
proxmox-kernel-6.5: 6.5.13-6
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 16.2.15+ds-0+deb12u1
corosync: 3.1.7-pve3
criu: 3.17.1-2
frr-pythontools: 8.5.2-1+pve1
glusterfs-client: 10.5-1
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
openvswitch-switch: 3.1.0-2+deb12u1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.3
pve-cluster: 8.0.10
pve-container: 5.2.3
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-2
pve-ha-manager: 4.0.6
pve-i18n: 3.3.2
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.3
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1
 
I am also experiencing these exact same issues with the iDRAC 8 on Dell R730xd and R630 servers using the 6.11 kernel.
Did you install the Dell iDRAC Service Module?
 
I have not. The iDRAC Service Module has never been necessary for proper operation on any previous version of Proxmox 7.x or 8.x, or on any version of RHEL or its compatibles. I can certainly give it a try, but needing it now would also point to a change in the kernel that makes it necessary for proper operation.
 
I am also experiencing these exact same issues with the iDRAC 8 on Dell R730xd and R630 servers using the 6.11 kernel.
The iDRAC works perfectly on the same machines with the 6.8 kernel; reverting to 6.8 also resolves the problem.
The same messages are logged in the iDRAC logs as for the original poster.

I booted into kernel 6.11 on a single-node Dell R630 system with iDRAC Express (so no virtual console, only serial); the system seems to be running fine so far.

BIOS version is 2.19.0
iDRAC version is 2.86.86.86
For comparison:
iDRAC version: 2.84.84.84
BIOS version: 2.17.0

* Are the affected systems in a cluster?
* Is the Automated System Recovery option enabled (iDRAC Settings -> Network -> Services)? Or do you have any other watchdog-related setting enabled? (I haven't worked much with Dell servers recently, so my memory is a bit rusty.)
* Are any watchdog modules loaded? (`lsmod`)
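For the last point, a quick way to check from a root shell is to filter the module list for common watchdog drivers; a minimal sketch (the name patterns are assumptions, as the actual module names vary by platform):
Code:
lsmod | grep -i -E 'wdt|watchdog|ipmi'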
 
I've been getting strange messages that I hadn't seen until now.
Has anyone had a similar problem?
[  +0.000225]  megasas_mgmt_fw_ioctl+0x2b2/0x9c0 [megaraid_sas]
[  +0.000226]  megasas_mgmt_ioctl_fw.constprop.0+0x249/0x2c0 [megaraid_sas]
[  +0.000220]  megasas_mgmt_ioctl+0x28/0x50 [megaraid_sas]
These might indicate an issue with loading the firmware of the MegaRAID controller.

Check if there's an update for the BIOS of the system (3.4 06/30/2020 looks like there might be), and if there are updates for the MegaRAID card.
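If you want to compare firmware versions from the OS, storcli can report them; a rough sketch (binary name as seen in the log above; the /c0 controller index is an assumption):
Code:
./storcli_x64 /c0 show | grep -i -E 'fw|firmware|bios'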
storcli_x64:
do you have any third-party software on that system (maybe even a monitoring script) ? - Then that might be the culprit... (check if there's an update for that as well)

Does the system crash or continue to run despite the messages?
 
Hello,

Thanks for your reply.
The BIOS has been updated to the latest version.
I have updated storcli, the firmware, and the script that monitors the RAID controller and disk arrays, but I doubt the problem comes from them; I check regularly for newer versions.

After logging these messages, the system has continued to work normally.
However, due to an NFS-related hang I am currently running kernel 6.5.13-6-pve, and for the time being everything works normally with it. A while ago I also tested kernel 6.8, and there I didn't have a problem with the RAID controller.

Best regards,
 
I have updated storcli, the firmware, and the script that monitors the RAID controller and disk arrays, but I doubt the problem comes from them; I check regularly for newer versions.
Do the messages continue to happen if you disable the checking script?
My (blind) guess is that the check script or storcli is not yet adapted to kernel 6.11.
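If the script runs from a systemd unit or cron job, a quick test could look like this; a minimal sketch (the sd5.service name is taken from the cpuset= field in the log above, adjust to your setup):
Code:
systemctl stop sd5.service        # or comment out the corresponding cron entry
dmesg -w | grep -i 'page allocation failure'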

However, due to an NFS-related hang I am currently running kernel 6.5.13-6-pve, and for the time being everything works normally with it. A while ago I also tested kernel 6.8, and there I didn't have a problem with the RAID controller.
Why use 6.5.13 if 6.8 works as well? (Out of curiosity; if there's a reason, please open a new thread so we don't have everything in the 6.11 announcement thread.)
 
Do the messages continue to happen if you disable the checking script?
My (blind) guess is that the check script or storcli is not yet adapted to kernel 6.11.
I didn't get to try it; the next day the server got stuck due to the NFS hang, and I switched to kernel 6.5.13-6-pve.

Why use 6.5.13 if 6.8 works as well? (Out of curiosity; if there's a reason, please open a new thread so we don't have everything in the 6.11 announcement thread.)


Actually, only kernel 6.5.13-6-pve works for me; anything above it gives me a problem with NFS.
I will wait for a newer 6.11 kernel to try. As far as I know the NFS problem was solved in 6.10, but it persists for me.

Otherwise, I have an idea about what works "strangely" in kernel versions higher than 6.5 and may be creating the NFS problem, at least for me. But I think that's for another thread.

To the colleagues with the IPMI problem on Dell: I recall observing a similar effect with SystemRescueCD and IPMI (I'm not sure whether it was Dell or Supermicro), but unfortunately I don't remember which of the recent versions reproduces the problem. Perhaps settings in /etc/default/grub may help.
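For anyone experimenting in that direction, the usual mechanism is to add a kernel parameter in /etc/default/grub and regenerate the boot configuration; a minimal sketch (the parameter itself is a placeholder, since I don't remember which setting helped):
Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet <parameter-to-test>"

# apply and reboot
update-grub
reboot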

P.S.
Friends of mine running Proxmox with kernel 6.11.0-2-pve and an LSI 9361-8i controller, with the same firmware, storcli, and monitoring-script versions I had before updating mine, do not have the problem I wrote about earlier. Their motherboards are X8DTL and X8DT3.
 
We recently uploaded a 6.11 kernel into our repositories. The current 6.8 kernel will stay the default on the Proxmox VE 8 series, the newly introduced 6.11 kernel is an option.
The 6.11 based kernel may be useful for some (especially newer) setups, for example if there is improved hardware support that has not yet been backported to 6.8.
This follows our tradition of upgrading the Proxmox VE kernel to match the current Ubuntu version until we reach an (Ubuntu) LTS release, like the 6.8 kernel is, and then provide newer kernels as opt-in. The 6.11 kernel is based on the Ubuntu 24.10 Oracular release.

We have run this kernel on some parts of our test setups over the last few days without any notable issues. For production setups we still recommend keeping the 6.8-based kernel, or testing on similar hardware/setups before moving all your production nodes up to 6.11.

How to install:
  1. Ensure that either the pve-no-subscription or pvetest repository is set up correctly (see the example after this list).
    You can do so via a CLI text editor or in the web UI under Node -> Repositories.
  2. Open a shell as root, e.g. through SSH or using the integrated shell on the web UI.
  3. apt update
  4. apt install proxmox-kernel-6.11
  5. reboot
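For reference, a minimal end-to-end run of the steps above (the repository line shown is the standard pve-no-subscription entry for Proxmox VE 8 on Debian 12 Bookworm):
Code:
# step 1: repository entry, e.g. in /etc/apt/sources.list
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

# steps 3-5: install the opt-in kernel and reboot
apt update
apt install proxmox-kernel-6.11
reboot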
Future updates to the 6.11 kernel will now be installed automatically when upgrading a node.

Please note:
  • The current 6.8 kernel is still supported and will stay the default kernel.
  • There were many changes, including improved hardware support and performance improvements all over the place.
    For a good overview of prominent changes, we recommend checking out the kernel-newbies site for 6.9, 6.10, and 6.11 (in progress).
  • For those depending on Realtek's r8125 out-of-tree driver, we also uploaded a newer r8125-dkms package in version 9.013.02-1~bpo12+1 to fix support for that driver when used with 6.8+ kernels.
  • The kernel is also available on the test and no-subscription repositories of Proxmox Backup Server and Proxmox Mail Gateway.
  • If you're unsure, we recommend continuing to use the 6.8-based kernel for now.

Feedback about how the new kernel performs in any of your setups is welcome!
Please provide basic details such as CPU model, storage types used, whether ZFS is the root file system, and the like, both for positive feedback and if you ran into issues where the opt-in 6.11 kernel seems to be the likely cause.
Hey,
I want to update to this kernel.
My machine has network issues with the Intel i217 NIC, which is not working; no LED is blinking at all.
I am new to Proxmox and just installed it, but there is no way to use it without a network connection.
Is there a possibility to update without a network connection, e.g. via USB stick?
Regards,
Christian
 
Is there a possibility to update without a network connection, e.g. via USB stick?
Yes, for the simplest setups it's enough to download the package file from our CDN:

http://download.proxmox.com/debian/...kernel-6.11.0-2-pve-signed_6.11.0-2_amd64.deb

Verify the integrity of the package:
Code:
sha256sum proxmox-kernel-6.11.0-2-pve-signed_6.11.0-2_amd64.deb
8248f789626d89c1201c98033bff1c51fe9e18fd1ac93a4c6fea34b3245c0dee  proxmox-kernel-6.11.0-2-pve-signed_6.11.0-2_amd64.deb

Put that file on a USB stick and mount it on the PVE host. Then install the package using apt, e.g.:
Code:
apt install /path/to/usb-mount/proxmox-kernel-6.11.0-2-pve-signed_6.11.0-2_amd64.deb
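If the stick does not mount automatically, the manual steps could look like this; a minimal sketch (/dev/sdb1 is an assumed device name, check lsblk first):
Code:
mkdir -p /mnt/usb
mount /dev/sdb1 /mnt/usb
apt install /mnt/usb/proxmox-kernel-6.11.0-2-pve-signed_6.11.0-2_amd64.deb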

If you use DKMS for out-of-tree kernel drivers, you also need the respective headers package; in that case just download that too and add it to the apt command.
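For example, a sketch with an assumed headers package file name (use the file you actually downloaded):
Code:
apt install /path/to/usb-mount/proxmox-kernel-6.11.0-2-pve-signed_6.11.0-2_amd64.deb /path/to/usb-mount/proxmox-headers-6.11.0-2-pve_6.11.0-2_amd64.deb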
 
I have not. The iDRAC Service Module has never been necessary for proper operation on any previous version of Proxmox 7.x or 8.x, or on any version of RHEL or its compatibles. I can certainly give it a try, but needing it now would also point to a change in the kernel that makes it necessary for proper operation.
The 6.11.11-1-pve kernel has fixed this issue completely.
Thanks to all who provided possible fixes.
 
I will have to walk back my statement about the 6.11.11-1-pve kernel fixing *all* of my iDRAC issues. It did solve most of them, and the virtual terminal does work most of the time. However, the main panel still often loads very slowly, and there are certificate issues with the virtual terminal in Firefox. Google Chrome gets around most of them, though it always shows certificate errors. I would have to say the iDRAC issues are still a "work in progress". If anyone has any suggestions or solutions, I would be happy to try them out.

We are a Dell shop, and as it stands, 6.11 still has too many rough edges, even for testing.
 
Reporting a new issue with 6.11.11-1: Proxmox is not reliable as a GlusterFS client on this version. Intermittently, records are not written and files are not transferred via rsync between the client and the server. No errors are logged on either the client or the server, so this is a "silent failure". Reverting the client Proxmox server back to 6.8.12-7 eliminates the problem.
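One way to catch such silent failures is to force a checksum comparison after the transfer; a minimal sketch (paths are placeholders for the rsync source and the GlusterFS mount):
Code:
# dry-run in checksum mode: any file rsync would resend differs between source and destination
rsync -avn --checksum /data/ /mnt/pve/glusterstore/data/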