Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

16 x AMD Opteron(tm) Processor 6380 (1 Socket)
PVE 8.2.7
ZFS
Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
Kernel 6.11.0-1-pve
VM: PBS 3.2-8 with kernel 6.11.0-1-pve
No issues so far.
 
I have tested various devices in my homelab for over a week now. I am mostly repurposing old hardware. It's all running nicely so far. I just love Proxmox. Thanks for a great and free product.
  • i5-9500T, 5G USB LAN Adapter Realtek RTL8157 (using r8152), iGPU Passthrough
  • i7-3720QM, 2.5G USB LAN Adapter (using r8152)
  • virtualized on Celeron J4125 (using Synology VMM)
  • i5-8500, 5G USB LAN Adapter Realtek RTL8157 (using r8152), ZFS Pool, VirGL GPU, GPU Passthrough (AMD Radeon RX 6400)
I have noticed that the 5G USB LAN Adapters do not report their correct stats in ethtool, but they seem to run just fine.
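For anyone who wants to reproduce the check: ethtool shows what the adapter reports, while a throughput test shows what it actually delivers. A quick sketch (the interface name and iperf3 server below are placeholders):

Code:
# what the NIC claims (link speed / advertised modes) - reported oddly by the RTL8157
ethtool enp1s0u1          # placeholder interface name

# what it actually pushes over the wire
iperf3 -c 192.0.2.10      # placeholder iperf3 server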
 
The Linstor DRBD plugin for Proxmox does not compile under 6.11:


Code:
CC [M]  /var/lib/dkms/drbd/9.2.11-1/build/src/drbd/build-current/drbd_interval.o
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_receiver.c:291:8: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
  291 | static const struct sync_descriptor strategy_descriptor(enum sync_strategy strategy)
      |        ^~~~~
  CC [M]  /var/lib/dkms/drbd/9.2.11-1/build/src/drbd/build-current/drbd_state.o
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_main.c: In function ‘drbd_create_device’:
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_main.c:4117:28: error: ‘QUEUE_FLAG_STABLE_WRITES’ undeclared (first use in this function); did you mean ‘BLK_FEAT_STABLE_WRITES’?
 4117 |         blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, disk->queue);
      |                            ^~~~~~~~~~~~~~~~~~~~~~~~
      |                            BLK_FEAT_STABLE_WRITES
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_main.c:4117:28: note: each undeclared identifier is reported only once for each function it appears in
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_main.c:4118:9: error: too many arguments to function ‘blk_queue_write_cache’
 4118 |         blk_queue_write_cache(disk->queue, true, true);
      |         ^~~~~~~~~~~~~~~~~~~~~
In file included from /var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_int.h:29,
                 from /var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_main.c:49:
./include/linux/blkdev.h:1322:20: note: declared here
 1322 | static inline bool blk_queue_write_cache(struct request_queue *q)
      |                    ^~~~~~~~~~~~~~~~~~~~~
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_nl.c: In function ‘decide_on_discard_support’:
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_nl.c:2080:9: error: implicit declaration of function ‘blk_queue_max_discard_sectors’; did you mean ‘bdev_max_discard_sectors’? [-Werror=implicit-function-declaration]
 2080 |         blk_queue_max_discard_sectors(q, max_discard_sectors);
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |         bdev_max_discard_sectors
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_nl.c:2081:9: error: implicit declaration of function ‘blk_queue_max_write_zeroes_sectors’; did you mean ‘bdev_write_zeroes_sectors’? [-Werror=implicit-function-declaration]
 2081 |         blk_queue_max_write_zeroes_sectors(q, max_discard_sectors);
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |         bdev_write_zeroes_sectors
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_nl.c: In function ‘drbd_reconsider_queue_parameters’:
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_nl.c:2152:17: error: implicit declaration of function ‘disk_update_readahead’ [-Werror=implicit-function-declaration]
 2152 |                 disk_update_readahead(device->vdisk);
      |                 ^~~~~~~~~~~~~~~~~~~~~
make[2]: *** [scripts/Makefile.build:244: /var/lib/dkms/drbd/9.2.11-1/build/src/drbd/build-current/drbd_main.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/var/lib/dkms/drbd/9.2.11-1/build/src/drbd/drbd_nl.c:2155:9: error: implicit declaration of function ‘blk_queue_max_hw_sectors’; did you mean ‘queue_max_hw_sectors’? [-Werror=implicit-function-declaration]
 2155 |         blk_queue_max_hw_sectors(q, common_limits.max_hw_sectors);
      |         ^~~~~~~~~~~~~~~~~~~~~~~~
      |         queue_max_hw_sectors
cc1: some warnings being treated as errors
make[2]: *** [scripts/Makefile.build:244: /var/lib/dkms/drbd/9.2.11-1/build/src/drbd/build-current/drbd_nl.o] Error 1
make[1]: *** [Makefile:1931: /var/lib/dkms/drbd/9.2.11-1/build/src/drbd/build-current] Error 2
make: *** [Makefile:248: kbuild] Error 2
make: Leaving directory '/var/lib/dkms/drbd/9.2.11-1/build/src/drbd'
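For context: these errors stem from the block-layer rework in recent kernels. The old blk_queue_* setters and QUEUE_FLAG_STABLE_WRITES are gone; drivers now describe cache, discard, and size limits up front in a struct queue_limits handed to blk_alloc_disk(). A minimal sketch of the new-style setup (illustrative only, not the actual DRBD fix - an updated drbd-dkms release is needed for 6.11):

Code:
/* Sketch: 6.11-style queue setup replacing the removed setters above. */
#include <linux/blkdev.h>
#include <linux/limits.h>

static struct gendisk *demo_alloc_disk(int node)
{
	struct queue_limits lim = {
		/* was: blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
		 *      blk_queue_write_cache(q, true, true); */
		.features = BLK_FEAT_STABLE_WRITES |
			    BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
		/* was: blk_queue_max_hw_sectors(q, ...);
		 *      blk_queue_max_discard_sectors(q, ...); */
		.max_hw_sectors = 8192,
		.max_hw_discard_sectors = UINT_MAX,
	};
	/* limits are validated and applied at allocation time */
	return blk_alloc_disk(&lim, node);
}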
 
I have a lot of SATA hiccups on all 3 drives:
(screenshot attachment)

Code:
[86418.478158] ata5.00: exception Emask 0x0 SAct 0x3be000 SErr 0x50000 action 0x6 frozen
[86418.478165] ata5: SError: { PHYRdyChg CommWake }
[86418.478168] ata5.00: failed command: WRITE FPDMA QUEUED
[86418.478170] ata5.00: cmd 61/08:68:70:6e:e9/00:00:d4:00:00/40 tag 13 ncq dma 4096 out
                        res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[86418.478174] ata5.00: status: { DRDY }
[86418.478176] ata5.00: failed command: WRITE FPDMA QUEUED
[86418.478177] ata5.00: cmd 61/18:70:f8:39:56/00:00:0b:00:00/40 tag 14 ncq dma 12288 out
                        res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[86418.478181] ata5.00: status: { DRDY }
[86418.478182] ata5.00: failed command: WRITE FPDMA QUEUED
[86418.478183] ata5.00: cmd 61/08:78:e8:07:09/00:00:24:03:00/40 tag 15 ncq dma 4096 out
                        res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[86418.478187] ata5.00: status: { DRDY }
[86418.478188] ata5.00: failed command: WRITE FPDMA QUEUED
[86418.478189] ata5.00: cmd 61/28:80:10:3a:56/00:00:0b:00:00/40 tag 16 ncq dma 20480 out
                        res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[86418.478193] ata5.00: status: { DRDY }
[86418.478194] ata5.00: failed command: WRITE FPDMA QUEUED
[86418.478195] ata5.00: cmd 61/10:88:38:3a:56/00:00:0b:00:00/40 tag 17 ncq dma 8192 out
                        res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[86418.478198] ata5.00: status: { DRDY }
[86418.478200] ata5.00: failed command: WRITE FPDMA QUEUED
[86418.478201] ata5.00: cmd 61/10:98:48:3a:56/00:00:0b:00:00/40 tag 19 ncq dma 8192 out
                        res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86418.478204] ata5.00: status: { DRDY }
[86418.478205] ata5.00: failed command: WRITE FPDMA QUEUED
[86418.478206] ata5.00: cmd 61/20:a0:40:12:56/00:00:0b:00:00/40 tag 20 ncq dma 16384 out
                        res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86418.478210] ata5.00: status: { DRDY }
[86418.478211] ata5.00: failed command: WRITE FPDMA QUEUED
[86418.478212] ata5.00: cmd 61/18:a8:58:3a:56/00:00:0b:00:00/40 tag 21 ncq dma 12288 out
                        res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86418.478215] ata5.00: status: { DRDY }
[86418.478223] ata5: hard resetting link
[86418.944189] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[86418.945787] ata5.00: configured for UDMA/133
[86418.945877] ata5: EH complete
[86559.850066] br-c92bc5541ad2: port 1(veth4bdc2fd) entered disabled state
[86559.850123] vethc3aea43: renamed from eth0
[86559.876142] br-c92bc5541ad2: port 1(veth4bdc2fd) entered disabled state
[86559.876423] veth4bdc2fd (unregistering): left allmulticast mode
[86559.876426] veth4bdc2fd (unregistering): left promiscuous mode
[86559.876428] br-c92bc5541ad2: port 1(veth4bdc2fd) entered disabled state
[86570.904523] br-c92bc5541ad2: port 1(vethd8f5bbb) entered blocking state
[86570.904529] br-c92bc5541ad2: port 1(vethd8f5bbb) entered disabled state
[86570.904537] vethd8f5bbb: entered allmulticast mode
[86570.904593] vethd8f5bbb: entered promiscuous mode
[86571.037445] eth0: renamed from vethc007969
[86571.037944] br-c92bc5541ad2: port 1(vethd8f5bbb) entered blocking state
[86571.037951] br-c92bc5541ad2: port 1(vethd8f5bbb) entered forwarding state
[86601.198113] ata6.00: exception Emask 0x0 SAct 0x820e SErr 0x50000 action 0x6 frozen
[86601.198120] ata6: SError: { PHYRdyChg CommWake }
[86601.198123] ata6.00: failed command: WRITE FPDMA QUEUED
[86601.198125] ata6.00: cmd 61/20:08:18:1e:56/00:00:0b:00:00/40 tag 1 ncq dma 16384 out
                        res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86601.198130] ata6.00: status: { DRDY }
[86601.198132] ata6.00: failed command: WRITE FPDMA QUEUED
[86601.198133] ata6.00: cmd 61/08:10:70:32:08/00:00:9c:01:00/40 tag 2 ncq dma 4096 out
                        res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[86601.198137] ata6.00: status: { DRDY }
[86601.198138] ata6.00: failed command: READ FPDMA QUEUED
[86601.198139] ata6.00: cmd 60/08:18:20:28:cf/00:00:1c:00:00/40 tag 3 ncq dma 4096 in
                        res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86601.198143] ata6.00: status: { DRDY }
[86601.198144] ata6.00: failed command: WRITE FPDMA QUEUED
[86601.198145] ata6.00: cmd 61/08:48:10:9d:e9/00:00:d4:00:00/40 tag 9 ncq dma 4096 out
                        res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[86601.198149] ata6.00: status: { DRDY }
[86601.198150] ata6.00: failed command: WRITE FPDMA QUEUED
[86601.198151] ata6.00: cmd 61/18:78:00:1e:56/00:00:0b:00:00/40 tag 15 ncq dma 12288 out
                        res 40/00:01:06:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86601.198155] ata6.00: status: { DRDY }
[86601.198158] ata6: hard resetting link
[86601.662097] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[86601.663610] ata6.00: configured for UDMA/133
[86601.663676] ata6: EH complete
[86661.986059] ata6.00: exception Emask 0x0 SAct 0x1c0f8 SErr 0x50000 action 0x6 frozen
[86661.986065] ata6: SError: { PHYRdyChg CommWake }
[86661.986068] ata6.00: failed command: WRITE FPDMA QUEUED
[86661.986069] ata6.00: cmd 61/28:18:c8:1f:56/00:00:0b:00:00/40 tag 3 ncq dma 20480 out
                        res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86661.986074] ata6.00: status: { DRDY }
[86661.986075] ata6.00: failed command: WRITE FPDMA QUEUED
[86661.986076] ata6.00: cmd 61/28:20:f0:1f:56/00:00:0b:00:00/40 tag 4 ncq dma 20480 out
                        res 40/00:01:06:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86661.986080] ata6.00: status: { DRDY }
[86661.986081] ata6.00: failed command: WRITE FPDMA QUEUED
[86661.986082] ata6.00: cmd 61/08:28:58:33:08/00:00:9c:01:00/40 tag 5 ncq dma 4096 out
                        res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86661.986086] ata6.00: status: { DRDY }
[86661.986087] ata6.00: failed command: WRITE FPDMA QUEUED
[86661.986088] ata6.00: cmd 61/10:30:28:10:9c/00:00:57:03:00/40 tag 6 ncq dma 8192 out
                        res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[86661.986091] ata6.00: status: { DRDY }
[86661.986093] ata6.00: failed command: WRITE FPDMA QUEUED
[86661.986093] ata6.00: cmd 61/20:38:18:20:56/00:00:0b:00:00/40 tag 7 ncq dma 16384 out
                        res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86661.986097] ata6.00: status: { DRDY }
[86661.986098] ata6.00: failed command: WRITE FPDMA QUEUED
[86661.986099] ata6.00: cmd 61/30:70:38:20:56/00:00:0b:00:00/40 tag 14 ncq dma 24576 out
                        res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86661.986102] ata6.00: status: { DRDY }
[86661.986104] ata6.00: failed command: WRITE FPDMA QUEUED
[86661.986105] ata6.00: cmd 61/30:78:68:20:56/00:00:0b:00:00/40 tag 15 ncq dma 24576 out
                        res 40/00:01:06:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86661.986108] ata6.00: status: { DRDY }
[86661.986109] ata6.00: failed command: WRITE FPDMA QUEUED
[86661.986110] ata6.00: cmd 61/30:80:98:20:56/00:00:0b:00:00/40 tag 16 ncq dma 24576 out
                        res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[86661.986113] ata6.00: status: { DRDY }
[86661.986121] ata6: hard resetting link
[86662.448072] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[86662.450162] ata6.00: configured for UDMA/133
[86662.450240] ata6: EH complete
[92363.872847] ata6.00: exception Emask 0x0 SAct 0x60000cd0 SErr 0x50000 action 0x6 frozen
[92363.872856] ata6: SError: { PHYRdyChg CommWake }
[92363.872860] ata6.00: failed command: WRITE FPDMA QUEUED
[92363.872861] ata6.00: cmd 61/08:20:f0:11:10/00:00:d5:00:00/40 tag 4 ncq dma 4096 out
                        res 40/00:01:06:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[92363.872867] ata6.00: status: { DRDY }
[92363.872868] ata6.00: failed command: WRITE FPDMA QUEUED
[92363.872869] ata6.00: cmd 61/08:30:b0:1e:56/00:00:0b:00:00/40 tag 6 ncq dma 4096 out
                        res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[92363.872873] ata6.00: status: { DRDY }
[92363.872874] ata6.00: failed command: WRITE FPDMA QUEUED
[92363.872876] ata6.00: cmd 61/18:38:40:12:56/00:00:0b:00:00/40 tag 7 ncq dma 12288 out
                        res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[92363.872879] ata6.00: status: { DRDY }
[92363.872880] ata6.00: failed command: READ FPDMA QUEUED
[92363.872881] ata6.00: cmd 60/80:50:88:9b:58/00:00:3e:01:00/40 tag 10 ncq dma 65536 in
                        res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[92363.872885] ata6.00: status: { DRDY }
[92363.872886] ata6.00: failed command: READ FPDMA QUEUED
[92363.872887] ata6.00: cmd 60/80:58:08:9d:58/00:00:3e:01:00/40 tag 11 ncq dma 65536 in
                        res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[92363.872891] ata6.00: status: { DRDY }
[92363.872892] ata6.00: failed command: WRITE FPDMA QUEUED
[92363.872893] ata6.00: cmd 61/08:e8:68:e3:aa/00:00:50:02:00/40 tag 29 ncq dma 4096 out
                        res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[92363.872897] ata6.00: status: { DRDY }
[92363.872898] ata6.00: failed command: READ FPDMA QUEUED
[92363.872899] ata6.00: cmd 60/80:f0:08:9c:58/00:00:3e:01:00/40 tag 30 ncq dma 65536 in
                        res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[92363.872902] ata6.00: status: { DRDY }
[92363.872906] ata6: hard resetting link
[92364.337834] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[92364.339359] ata6.00: configured for UDMA/133
[92364.339427] ata6: EH complete
[93483.852115] ata6.00: exception Emask 0x0 SAct 0x81800211 SErr 0x50000 action 0x6 frozen
[93483.852121] ata6: SError: { PHYRdyChg CommWake }
[93483.852123] ata6.00: failed command: READ FPDMA QUEUED
[93483.852124] ata6.00: cmd 60/80:00:38:f4:5a/00:00:3e:01:00/40 tag 0 ncq dma 65536 in
                        res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[93483.852128] ata6.00: status: { DRDY }
[93483.852130] ata6.00: failed command: WRITE FPDMA QUEUED
[93483.852131] ata6.00: cmd 61/08:20:c8:10:7f/00:00:23:03:00/40 tag 4 ncq dma 4096 out
                        res 40/00:01:06:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[93483.852134] ata6.00: status: { DRDY }
[93483.852135] ata6.00: failed command: WRITE FPDMA QUEUED
[93483.852136] ata6.00: cmd 61/10:48:c8:17:58/00:00:0b:00:00/40 tag 9 ncq dma 8192 out
                        res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[93483.852139] ata6.00: status: { DRDY }
[93483.852140] ata6.00: failed command: READ FPDMA QUEUED
[93483.852141] ata6.00: cmd 60/80:b8:38:f3:5a/00:00:3e:01:00/40 tag 23 ncq dma 65536 in
                        res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[93483.852144] ata6.00: status: { DRDY }
[93483.852145] ata6.00: failed command: READ FPDMA QUEUED
[93483.852145] ata6.00: cmd 60/80:c0:b8:f3:5a/00:00:3e:01:00/40 tag 24 ncq dma 65536 in
                        res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[93483.852148] ata6.00: status: { DRDY }
[93483.852149] ata6.00: failed command: WRITE FPDMA QUEUED
[93483.852150] ata6.00: cmd 61/08:f8:68:62:11/00:00:d5:00:00/40 tag 31 ncq dma 4096 out
                        res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[93483.852153] ata6.00: status: { DRDY }
[93483.852156] ata6: hard resetting link
[93484.317146] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[93484.318968] ata6.00: configured for UDMA/133
[93484.319036] ata6: EH complete
[94344.056141] sh (3780940): drop_caches: 3

As broken SATA cables on all three drives at once seem unlikely, I am testing the suggestion from this thread to see if it helps:
https://bbs.archlinux.org/viewtopic.php?id=301065
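The PHYRdyChg/CommWake bits in the SError mask often point at SATA link power management, so one common test - I am assuming the linked thread suggests something along these lines - is to pin the ALPM policy and watch whether the resets stop:

Code:
# show the current link power management policy per SATA host
cat /sys/class/scsi_host/host*/link_power_management_policy

# pin everything to max_performance for testing (not persistent across reboots)
for p in /sys/class/scsi_host/host*/link_power_management_policy; do
    echo max_performance > "$p"
done

# then keep an eye on the kernel log for further ata resets
dmesg -wT | grep -i 'ata[0-9]'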
 
Kernel 6.11 messed up my IOMMU groups, so it is now unusable for me. I noticed it when my Proxmox host had no networking: I pass through one network interface to my OpenWRT VM for my router. That one worked, and Proxmox did get on the internet. I rebooted with the old 6.8 kernel, and it worked fine. I then passed through another Ethernet NIC and got that one working in my OpenWRT VM, so now I am able to connect to the Proxmox GUI from my laptop. It turns out that all my network devices end up in the same IOMMU group on kernel 6.11.

on 6.11:
Code:
root@pve:~# cat /proc/cmdline
initrd=\EFI\proxmox\6.11.0-1-pve\initrd.img-6.11.0-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt pcie_aspm=off intremap=no_x2apic_optout
Code:
IOMMU Group 0 00:02.0 VGA compatible controller [0300]: Intel Corporation Haswell-ULT Integrated Graphics Controller [8086:0a16] (rev 0b)
IOMMU Group 1 00:00.0 Host bridge [0600]: Intel Corporation Haswell-ULT DRAM Controller [8086:0a04] (rev 0b)
IOMMU Group 2 00:03.0 Audio device [0403]: Intel Corporation Haswell-ULT HD Audio Controller [8086:0a0c] (rev 0b)
IOMMU Group 3 00:14.0 USB controller [0c03]: Intel Corporation 8 Series USB xHCI HC [8086:9c31] (rev 04)
IOMMU Group 4 00:16.0 Communication controller [0780]: Intel Corporation 8 Series HECI #0 [8086:9c3a] (rev 04)
IOMMU Group 5 00:1b.0 Audio device [0403]: Intel Corporation 8 Series HD Audio Controller [8086:9c20] (rev 04)
IOMMU Group 6 00:1c.0 PCI bridge [0604]: Intel Corporation 8 Series PCI Express Root Port 1 [8086:9c10] (rev e4)
IOMMU Group 6 00:1c.1 PCI bridge [0604]: Intel Corporation 8 Series PCI Express Root Port 2 [8086:9c12] (rev e4)
IOMMU Group 6 00:1c.2 PCI bridge [0604]: Intel Corporation 8 Series PCI Express Root Port 3 [8086:9c14] (rev e4)
IOMMU Group 6 00:1c.3 PCI bridge [0604]: Intel Corporation 8 Series PCI Express Root Port 4 [8086:9c16] (rev e4)
IOMMU Group 6 00:1c.4 PCI bridge [0604]: Intel Corporation 8 Series PCI Express Root Port 5 [8086:9c18] (rev e4)
IOMMU Group 6 01:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 6 02:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 6 03:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 6 04:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 6 05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller [10ec:8125] (rev 05)
IOMMU Group 7 00:1d.0 USB controller [0c03]: Intel Corporation 8 Series USB EHCI #1 [8086:9c26] (rev 04)
IOMMU Group 8 00:1f.0 ISA bridge [0601]: Intel Corporation 8 Series LPC Controller [8086:9c43] (rev 04)
IOMMU Group 8 00:1f.2 SATA controller [0106]: Intel Corporation 8 Series SATA Controller 1 [AHCI mode] [8086:9c03] (rev 04)
IOMMU Group 8 00:1f.3 SMBus [0c05]: Intel Corporation 8 Series SMBus Controller [8086:9c22] (rev 04)

on 6.8:
Code:
initrd=\EFI\proxmox\6.8.12-4-pve\initrd.img-6.8.12-4-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt pcie_aspm=off intremap=no_x2apic_optout
Code:
IOMMU Group 0 00:02.0 VGA compatible controller [0300]: Intel Corporation Haswell-ULT Integrated Graphics Controller [8086:0a16] (rev 0b)
IOMMU Group 1 00:00.0 Host bridge [0600]: Intel Corporation Haswell-ULT DRAM Controller [8086:0a04] (rev 0b)
IOMMU Group 10 00:1c.4 PCI bridge [0604]: Intel Corporation 8 Series PCI Express Root Port 5 [8086:9c18] (rev e4)
IOMMU Group 11 00:1d.0 USB controller [0c03]: Intel Corporation 8 Series USB EHCI #1 [8086:9c26] (rev 04)
IOMMU Group 12 00:1f.0 ISA bridge [0601]: Intel Corporation 8 Series LPC Controller [8086:9c43] (rev 04)
IOMMU Group 12 00:1f.2 SATA controller [0106]: Intel Corporation 8 Series SATA Controller 1 [AHCI mode] [8086:9c03] (rev 04)
IOMMU Group 12 00:1f.3 SMBus [0c05]: Intel Corporation 8 Series SMBus Controller [8086:9c22] (rev 04)
IOMMU Group 13 01:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 14 02:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 15 03:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 16 04:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 17 05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller [10ec:8125] (rev 05)
IOMMU Group 2 00:03.0 Audio device [0403]: Intel Corporation Haswell-ULT HD Audio Controller [8086:0a0c] (rev 0b)
IOMMU Group 3 00:14.0 USB controller [0c03]: Intel Corporation 8 Series USB xHCI HC [8086:9c31] (rev 04)
IOMMU Group 4 00:16.0 Communication controller [0780]: Intel Corporation 8 Series HECI #0 [8086:9c3a] (rev 04)
IOMMU Group 5 00:1b.0 Audio device [0403]: Intel Corporation 8 Series HD Audio Controller [8086:9c20] (rev 04)
IOMMU Group 6 00:1c.0 PCI bridge [0604]: Intel Corporation 8 Series PCI Express Root Port 1 [8086:9c10] (rev e4)
IOMMU Group 7 00:1c.1 PCI bridge [0604]: Intel Corporation 8 Series PCI Express Root Port 2 [8086:9c12] (rev e4)
IOMMU Group 8 00:1c.2 PCI bridge [0604]: Intel Corporation 8 Series PCI Express Root Port 3 [8086:9c14] (rev e4)
IOMMU Group 9 00:1c.3 PCI bridge [0604]: Intel Corporation 8 Series PCI Express Root Port 4 [8086:9c16] (rev e4)
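For anyone who wants to compare on their own box, listings like the above can be generated with a small loop over sysfs (a generic sketch, not necessarily the exact script used here):

Code:
#!/bin/bash
# print every PCI device together with the IOMMU group it landed in
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d%/devices/*}; g=${g##*/}   # extract the group number from the path
    echo "IOMMU Group $g $(lspci -nns "${d##*/}")"
done | sort -V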
 
The only issue I am seeing so far is that on my PowerEdge R7615 (EPYC 9174F), after bootup, the iDRAC says no signal on the console. It's like I lost video out...

I got the 'Found volume group "pve"' message and the mount message for /dev/mapper/pve-root, and then I lost the console.

The system is up and running, though.
 
The only issue I am seeing so far is that on my PowerEdge R7615 (EPYC 9174F), after bootup, the iDRAC says no signal on the console. It's like I lost video out...

I got the 'Found volume group "pve"' message and the mount message for /dev/mapper/pve-root, and then I lost the console.

The system is up and running, though.

I do not know if it is related at all, but what I have noticed since the 6.11 kernel on my IPMI consoles (Supermicro with AST2600 and AST2400) is that the resolution now switches to 1920x1200 during boot, instead of the 1280x1024 (or something like that) it used before.
 
Thanks - this confirms the findings linked in https://bugzilla.proxmox.com/show_bug.cgi?id=5926 as well.

I sent a patch that cherry-picks the change:
https://lore.proxmox.com/pve-devel/.../T/#maa3f4c3d408f4918e1061ae5da303739d022a2eb

So it should be fixed in one of the next 6.11 proxmox-kernel builds.
 
  • Like
Reactions: dakralex
Awesome. I'll keep an eye on https://git.proxmox.com/?p=pve-kernel.git and apt search proxmox-kernel-6.11 for updates. For now 6.8.12-4-pve is working fine.
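For reference, checking for and installing the opt-in kernel once a fixed build lands (package name as used in this thread):

Code:
apt update
apt search proxmox-kernel-6.11
# once a build containing the fix is available:
apt install proxmox-kernel-6.11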
We plan on getting a new 6.11 kernel out soon - it depends a bit on our upstream (Ubuntu) and their release cycle as well - but I assume it will be within the next 2 weeks (feedback on whether it's working for you would be appreciated!)
 
The only issue I am seeing so far is that on my PowerEdge R7615 (EPYC 9174F), after bootup, the iDRAC says no signal on the console. It's like I lost video out...

I got the 'Found volume group "pve"' message and the mount message for /dev/mapper/pve-root, and then I lost the console.

The system is up and running, though.
On a hunch - you could try adding `nomodeset` to the kernel commandline (see https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline)
And in any case - please check the journal after booting for any errors (if there is something significant, you can open a new thread and mention my username (@Stoiko Ivanov) so I can take a look).
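For reference, a sketch of how that edit looks depending on how the host boots (the linked docs cover both cases):

Code:
# GRUB (typical for LVM installs like the pve-root above):
#   add nomodeset to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub

# systemd-boot / proxmox-boot-tool (typical for ZFS installs):
#   add nomodeset to /etc/kernel/cmdline, then:
proxmox-boot-tool refresh

# after rebooting, check the journal for anything significant:
journalctl -b -p err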
 
