[SOLVED] PVE9.1 installation - ISCSI boot problem/kernel panic

Merkos

New Member
Dec 1, 2025
Hi there,

From a bunch of information on this forum and other websites, I managed to install PVE 8.4 onto an iSCSI-based SAN.
It worked like a dream, but upgrading a node in the cluster to 9.1, or installing 9.1 from scratch, results in a kernel panic.

The process I was using for 8.4 is as follows (a rough command-level sketch of steps 5 and 6 follows the list):

  1. Pre-setup: create the ZFS and iSCSI LUNs, etc.
  2. Boot from the install ISO.
  3. Select the graphical debug installation mode.
  4. Ctrl+D at the first pause.
  5. At the second pause:
    1. Set up an IP interface.
    2. Use dpkg to install libisns, libopeniscsiusr, and open-iscsi.
    3. mkdir -p /run/lock/iscsi
    4. Put the right initiator name into /etc/iscsi/initiatorname.iscsi.
    5. Start the iscsid service.
    6. Run iscsiadm discovery and log in.
    7. Ctrl+D to continue with the install.
  6. After the installation:
    1. Chroot into the new install.
      1. Configure iSCSI, etc.:
        echo "ISCSI_AUTO=true" > /etc/iscsi/iscsi.initramfs
        echo "ISCSI_IBFT=true" >> /etc/iscsi/iscsi.initramfs
        echo iscsi_ibft | tee -a /etc/initramfs-tools/modules
        echo iscsi_tcp | tee -a /etc/initramfs-tools/modules
        echo BOOT=iscsi >> /etc/initramfs-tools/initramfs.conf
        echo "InitiatorName=*****************" > /etc/iscsi/initiatorname.iscsi
      2. Update GRUB and the initramfs.
  7. Reboot.
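
To save others some typing, the two shell sessions above boil down to roughly the commands below. Treat this as a sketch, not a verified recipe: the interface name, IP, portal address, and IQN are placeholders, the .deb packages have to be copied into the install environment yourself, and the new root may be mounted somewhere other than /mnt on your setup.

Code:
## Step 5 - in the installer's second debug shell
ip addr add 192.0.2.10/24 dev eno1 && ip link set eno1 up    # placeholder NIC/IP
dpkg -i libisns*.deb libopeniscsiusr*.deb open-iscsi*.deb    # packages copied in beforehand
mkdir -p /run/lock/iscsi
echo "InitiatorName=<your-iqn>" > /etc/iscsi/initiatorname.iscsi
iscsid                                                       # start the daemon directly
iscsiadm -m discovery -t sendtargets -p <portal-ip>:3260
iscsiadm -m node --login
# Ctrl+D so the installer continues and can see the LUN

## Step 6 - after the installation, chroot into the new root
## (assuming it is mounted at /mnt; re-mount it if necessary)
for d in dev proc sys run; do mount --rbind /$d /mnt/$d; done
chroot /mnt /bin/bash

echo "ISCSI_AUTO=true"  >  /etc/iscsi/iscsi.initramfs
echo "ISCSI_IBFT=true"  >> /etc/iscsi/iscsi.initramfs
echo "InitiatorName=<your-iqn>" > /etc/iscsi/initiatorname.iscsi
echo iscsi_ibft >> /etc/initramfs-tools/modules
echo iscsi_tcp  >> /etc/initramfs-tools/modules
echo BOOT=iscsi >> /etc/initramfs-tools/initramfs.conf

update-initramfs -u -k all
update-grub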


In PVE 9.1 I tried the same process, but I get a kernel panic of "Attempted to kill init" right at the tail end of the initramfs.
I've tried a number of variations and additions to the process to try to resolve this, with no luck at all :(
I saw this thread:
But running
"mkdir -p /var/lib/iscsi
iscsistart -b"
didn't fix it either.
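
For anyone who wants to reproduce that attempt: those two commands only matter inside the initramfs, so they have to be wired into an initramfs-tools hook, roughly like the local-top script below, then made executable and followed by update-initramfs -u. This is my own sketch with a made-up file name, not something copied from that thread.

Code:
#!/bin/sh
# Hypothetical /etc/initramfs-tools/scripts/local-top/iscsi-ibft
# Sketch only: the name and exact contents are guesses, not a verified fix.
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
. /scripts/functions

modprobe iscsi_ibft
modprobe iscsi_tcp
configure_networking            # needs ip=... (or DHCP) on the kernel command line

mkdir -p /var/lib/iscsi         # iscsistart wants this directory to exist
iscsistart -b || panic "iSCSI login from firmware (iBFT) failed"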

Panic trace:

Code:
[    9.458349] iBFT detected.
[    9.471336] Loading iSCSI transport class v2.0-870.
[    9.473231] usb 2-1: new high-speed USB device number 2 using ehci-pci
[    9.493925] iscsi: registered transport (tcp)
[    9.507742] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000200
[    9.516328] CPU: 47 UID: 0 PID: 1 Comm: init Not tainted 6.17.2-1-pve #1 PREEMPT(voluntary)
[    9.525748] Hardware name: Cisco Systems Inc UCSB-B200-M3/UCSB-B200-M3, BIOS B200M3.2.2.6h.0.110720191420 11/07/2019
[    9.537497] Call Trace:
[    9.540227]  <TASK>
[    9.542568]  dump_stack_lvl+0x5f/0x90
[    9.546658]  dump_stack+0x10/0x18
[    9.550357]  vpanic+0xda/0x2e0
[    9.553767]  panic+0x67/0x67
[    9.556981]  ? acct_update_integrals+0x4e/0x120
[    9.562042]  do_exit.cold+0x15/0x15
[    9.565935]  do_group_exit+0x34/0x90
[    9.569917]  __x64_sys_exit_group+0x18/0x20
[    9.574585]  x64_sys_call+0x232c/0x2330
[    9.578865]  do_syscall_64+0x80/0xa30
[    9.582952]  ? vfs_write+0x274/0x490
[    9.586944]  ? __f_unlock_pos+0x12/0x20
[    9.591227]  ? ksys_write+0x8d/0xf0
[    9.595113]  ? __x64_sys_write+0x19/0x30
[    9.599492]  ? x64_sys_call+0x79/0x2330
[    9.603770]  ? do_syscall_64+0xb8/0xa30
[    9.608050]  ? __f_unlock_pos+0x12/0x20
[    9.612331]  ? ksys_write+0x8d/0xf0
[    9.616223]  ? __x64_sys_write+0x19/0x30
[    9.620600]  ? x64_sys_call+0x79/0x2330
[    9.624879]  ? do_syscall_64+0xb8/0xa30
[    9.629159]  ? do_syscall_64+0xb8/0xa30
[    9.633439]  ? irqentry_exit+0x43/0x50
[    9.637624]  ? exc_page_fault+0x90/0x1b0
[    9.641992]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[    9.647630] RIP: 0033:0x782db94f6295
[    9.651621] Code: 69 9b 10 00 f7 d8 bd ff ff ff ff 64 89 02 eb c6 e8 20 f9 03 00 48 8b 35 51 9b 10 00 ba e7 00 00 00 eb 03 66 90 f4 89 d0 0f 05 <48> 3d 00 f0 ff ff 76 f3 f7 d8 64 89 06 eb ec 66 2e 0f 1f 84 00 00
[    9.672582] RSP: 002b:00007fff40c43c98 EFLAGS: 00000202 ORIG_RAX: 00000000000000e7
[    9.681034] RAX: ffffffffffffffda RBX: 000058c8a90af2a0 RCX: 0000782db94f6295
[    9.688999] RDX: 00000000000000e7 RSI: ffffffffffffff88 RDI: 0000000000000002
[    9.696963] RBP: 0000000000000004 R08: 00007fff40c43d80 R09: 00000000000000be
[    9.704929] R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000001
[    9.712893] R13: 00007fff40c44070 R14: 0000000000000000 R15: 000058c88533d8b8
[    9.720858]  </TASK>
[    9.723396] Kernel Offset: 0x2b800000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[    9.738790] pstore: backend (erst) writing error (-28)


Anyone have thoughts/experience, or able to offer encouraging words? :D
 
Thanks @bbgeek17.
Option 'a' worked.
I had a feeling I was going to need to try that, but it's a bit awkward to do in my setup - I needed to create a new VM to be able to debootstrap it, etc.

I was hoping for an easy answer, or to find something wrong in my process, so I could avoid the vanilla Debian path... but it did the trick.
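
For anyone else landing here, the shape of what ended up working for me was roughly the following. Treat it as a sketch: it assumes Debian 13 "trixie" as the PVE 9 base and that the iSCSI LUN is already logged in with a filesystem mounted at /mnt, and the exact repository and keyring steps should be taken from the official "Install Proxmox VE on Debian" wiki page rather than from memory here.

Code:
# Vanilla-Debian-first route (sketch, placeholders throughout)
debootstrap trixie /mnt http://deb.debian.org/debian

for d in dev proc sys run; do mount --rbind /$d /mnt/$d; done
chroot /mnt /bin/bash

# Inside the chroot: add the Proxmox VE repository and its archive keyring
# (see the official wiki for the exact, current instructions), then install
# PVE on top of Debian.
echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install.list
apt update
apt install proxmox-ve postfix open-iscsi

# Then repeat the iscsi.initramfs / initramfs-tools configuration from my
# first post, install a bootloader for your boot mode, and rebuild with
# update-initramfs -u -k all and update-grub.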
 
I just hit a similar panic during my ISO install of 9.1 on an iSCSI LUN. Should this really be marked as solved since the solutions are workarounds?
 
I just hit a similar panic during my ISO install of 9.1 on an iSCSI LUN. Should this really be marked as solved since the solutions are workarounds?
It should not.

We're in 2026 now, and it's absolutely unacceptable that Proxmox can't be installed on a SAN/iSCSI/NVMe-oF LUN without jumping through hoops. And even if you manage to do it, there's this nagging feeling... is it going to crash and burn at some point? During an update, maybe?

The Proxmox team should take lessons from the RHEL world, where installing to a SAN is as simple as clicking "Add Disk" and choosing your poison (iSCSI/NVMe-oF/etc.). You don't have to modify a single thing other than choosing that disk for installation. And it's rock solid.
 
Sorry, these are workarounds, not solutions to the problem.
Hey @kapone, there is no need to apologize; we are not necessarily in disagreement. Given that the installation method discussed in this thread is not officially supported, all suggested approaches are indeed workarounds.

That said, this is a volunteer and community-driven support forum. Any proposal such as the one you mentioned in comment #6 is unlikely to be implemented based solely on discussion here.

If you believe there is a concrete business use case that would enhance PVE and merits prioritization over other work, you should submit it as a request in Bugzilla:
https://bugzilla.proxmox.com/

If accepted, it may eventually appear in the Roadmap:
https://pve.proxmox.com/wiki/Roadmap#Roadmap


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
That said, this is a volunteer and community-driven support forum. Any proposal such as the one you mentioned in comment #6 is unlikely to be implemented based solely on discussion here.
It wasn't a proposal, it was a dig directed at the Proxmox team. Unless they've been living under a rock... the SAN setup in the RHEL world has been out there for ages, and I'm sure they did/could look at it. The fact that it isn't already there in Proxmox tells me that the Proxmox team doesn't think it's important.

Fine. That's their decision; I don't expect them to take my word for it.