No boot after 6.8.1

mircot80

New Member
Feb 13, 2024
Hi everyone.
I have a problem with my Proxmox host. Everything worked fine until yesterday: after I installed the latest updates, Proxmox started becoming unresponsive with a black screen. When I turned it off and on again it worked for a while and then stopped responding again (pve-manager 8.1.10, Linux 6.5.13-5-pve).
Being a masochist, I decided to update the kernel to 6.8.1-1-pve, and now it no longer boots at all:

error01.jpg

Booting into "Advanced options for Proxmox VE GNU/Linux" I get this error:

error02.jpg

The strange thing is that now if I boot with Linux 6.5.13-5-pve everything works.

So I ask:
1) Is there anything I can try to fix the problem?
2) How can I set it to boot automatically with the 6.5.13-5-pve kernel?
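For reference on question 2, recent Proxmox VE releases ship proxmox-boot-tool, which can pin a specific kernel as the default boot entry. A sketch, assuming the boot entries are managed by that tool (check the exact version string with "kernel list" first):

Code:
# list the installed kernels to confirm the exact version string
proxmox-boot-tool kernel list
# pin the known-good kernel as the default boot entry
proxmox-boot-tool kernel pin 6.5.13-5-pve
# later, to return to booting the newest installed kernel
proxmox-boot-tool kernel unpin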

Thank you
 
In case it helps:

Code:
Disk /dev/loop0: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 990 PRO 1TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2427D65D-FC3D-4BAC-AB1E-712BDFC9E4E8

Device              Start        End    Sectors   Size Type
/dev/nvme0n1p1       2048    1050623    1048576   512M EFI System
/dev/nvme0n1p2    1050624 1951522815 1950472192 930.1G Linux filesystem
/dev/nvme0n1p3 1951522816 1953523711    2000896   977M Linux swap
 
Hi,

Have you tried updating the firmware/UEFI BIOS, or the firmware of any other PCI device?

Could you please check the syslog from the last failed boot?

Do you have a GPU added to your server?

Could you post the output of the `pveversion -v` command when you boot from an older Proxmox kernel?
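The previous boot's log can be read with journalctl; a sketch, assuming persistent journalling is enabled (otherwise -b -1 has nothing to show, and a crash inside the initramfs may leave no log at all):

Code:
# errors from the previous (failed) boot
journalctl -b -1 -p err
# the full log of that boot
journalctl -b -1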
 
I haven't checked for BIOS updates.

Where do I find the syslog from the last failed boot?

Yes, Proxmox runs on the integrated Intel graphics, and I have 2 other GPUs.

Code:
pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.5.13-5-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.8.1-1-pve-signed: 6.8.1-1
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx8
intel-microcode: 3.20231114.1~deb12u1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.2-1
proxmox-backup-file-restore: 3.2.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.0.11
pve-docs: 8.2.2
pve-edk2-firmware: not correctly installed
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.6
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2
 
The error simply says that it can't mount the root partition:
/dev/nvme0n1p2

I don't know whether it's a dm partition, LVM, or anything else, because you didn't provide /etc/fstab.
However, the UUID used in your fstab (or in the corresponding initramfs) seems to be wrong, or the disk itself can't be found.

That's an NVMe drive, so no storage controllers are involved, which means your initramfs should have no trouble booting, since the NVMe is attached directly to the CPU.
Either the UUID is wrong, or the fstab is wrong, or you simply need to boot somehow and regenerate the initramfs, or in the worst case your SSD died.
That's at least what I think.

If you manage to boot somehow, provide the output of blkid and your fstab. It could even be a wrong cmdline.
Provide the output of "cat /proc/cmdline" as well.
Cheers
 
In my opinion the SSD is not damaged, because everything works fine with kernel 6.5.

Code:
sudo blkid
/dev/sdc1: UUID="a7c00a8c-8a6e-4658-b305-15305cad6a8c" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ae134ffe-c412-488a-8186-aa5584abb7c7"
/dev/loop1: UUID="a1eac131-d20e-42a7-bea4-b633adac528b" BLOCK_SIZE="4096" TYPE="ext4"
/dev/nvme0n1p1: UUID="504b8af4-4af6-4e35-a957-f23a2d5e8992" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="a5e06ead-7d11-4d40-9644-c70ec31dc0a1"
/dev/sdb1: UUID="249dfe2c-18fd-4928-9f14-37aad6fd3cfe" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="f705ffbd-e111-40dc-8cfe-d8ad40f90610"
/dev/loop2: UUID="51c6bac5-27ab-4d04-827b-d2ca507a6483" BLOCK_SIZE="4096" TYPE="ext4"
/dev/loop0: UUID="92e4f98c-e6f0-466f-a175-062deeaded75" BLOCK_SIZE="4096" TYPE="ext4"
/dev/nvme1n1p2: UUID="e1fca195-3f22-4cd2-90b8-e7c32a3d9b85" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9ea58d16-47e7-4054-b6b1-a7c037d255ee"
/dev/nvme1n1p3: UUID="3169522a-045f-4d27-8bdc-dfc3847b831f" TYPE="swap" PARTUUID="9da43fa4-c16c-4644-bb82-102cf3f54817"
/dev/nvme1n1p1: UUID="213A-4968" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="9dd8ea4e-b2e6-4a66-83b3-9f9efb82426e"
/dev/sda1: UUID="0a1fc0c9-be2c-48d5-b212-e9c1bf73e345" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="706ab7db-fa81-40a2-a97d-c1b0eafa497f"

Code:
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/nvme0n1p2 during installation
UUID=e1fca195-3f22-4cd2-90b8-e7c32a3d9b85 /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/nvme0n1p1 during installation
UUID=213A-4968  /boot/efi       vfat    umask=0077      0       1
# swap was on /dev/nvme0n1p3 during installation
UUID=3169522a-045f-4d27-8bdc-dfc3847b831f none            swap    sw              0       0
/dev/disk/by-id/wwn-0x5000cca284e2ece3 /mnt/wwn-0x5000cca284e2ece3 auto nosuid,nodev,nofail,noauto 0 0
/dev/disk/by-id/wwn-0x5000c500dbb42529 /mnt/wwn-0x5000c500dbb42529 auto nosuid,nodev,nofail,noauto 0 0
/dev/disk/by-id/wwn-0x5000c500e0cba7fe /mnt/wwn-0x5000c500e0cba7fe auto nosuid,nodev,nofail,noauto 0 0
/dev/disk/by-id/wwn-0x5000c500c9008b02 /mnt/wwn-0x5000c500c9008b02 auto nosuid,nodev,nofail,noauto 0 0
/dev/disk/by-id/wwn-0x5000c500e4c8bf2d /mnt/wwn-0x5000c500e4c8bf2d auto nosuid,nodev,nofail,noauto 0 0
/dev/disk/by-id/wwn-0x5000c500e65d5924 /mnt/wwn-0x5000c500e65d5924 auto nosuid,nodev,nofail,noauto 0 0
/dev/disk/by-id/wwn-0x5000c500ecc1e231 /mnt/wwn-0x5000c500ecc1e231 auto nosuid,nodev,nofail,noauto 0 0
/dev/disk/by-id/wwn-0x5000c500c98f8c82 /mnt/wwn-0x5000c500c98f8c82 auto nosuid,nodev,nofail,noauto 0 0
 
Hmmh, that's all looking great.
Is there any reason you mount your disks under /mnt? Since you use "0 0" it doesn't matter anyway, and it is not the reason the system doesn't boot, but it looks somewhat unusual to me xD

However, that's all fine.
I'm just confused about one thing:
in your fstab you're using /dev/nvme1n1p2 as the root partition, but a few posts earlier it was /dev/nvme0n1p2.
You could try to exchange:
UUID=e1fca195-3f22-4cd2-90b8-e7c32a3d9b85 / ext4 errors=remount-ro 0 1
with:
/dev/nvme1n1p2 / ext4 errors=remount-ro 0 1

But the UUID should actually be the better approach, as it is now.

Otherwise there is this:
/dev/nvme0n1p1: UUID="504b8af4-4af6-4e35-a957-f23a2d5e8992" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="a5e06ead-7d11-4d40-9644-c70ec31dc0a1"

That is ext4 as well; maybe a root partition or something? It's confusing, because you have an ext4 partition on /dev/nvme0n1p1 and another on /dev/nvme1n1p2.
But /dev/nvme1 looks correct to me, because it also has a boot and a swap partition, while /dev/nvme0 has only the one ext4 partition.
It's just confusing because some posts ago you mentioned /dev/nvme0n1p2.

However, could you also provide your cmdline?
 
I have always mounted them under /mnt; what should I do instead?

You're right, it's strange. I don't know what happened; I thought nvme0n1 was the main SSD.
I have a Samsung 990 Pro for Proxmox and some virtual machines, and another Samsung 990 Pro for 2 other virtual machines.

cmdline? What do I have to do?

Code:
sudo lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0         7:0    0   100G  0 loop
loop1         7:1    0   500G  0 loop
loop2         7:2    0  15.5T  0 loop
sda           8:0    0   2.7T  0 disk
└─sda1        8:1    0   2.7T  0 part /mnt/pve/hdd3TB
sdb           8:16   0   1.8T  0 disk
└─sdb1        8:17   0   1.8T  0 part /mnt/pve/hdd2TB
sdc           8:32   0  14.6T  0 disk
└─sdc1        8:33   0  14.6T  0 part /mnt/pve/hdd16TB
nvme1n1     259:0    0 931.5G  0 disk
├─nvme1n1p1 259:2    0   512M  0 part /boot/efi
├─nvme1n1p2 259:3    0 930.1G  0 part /
└─nvme1n1p3 259:4    0   977M  0 part [SWAP]
nvme0n1     259:1    0 931.5G  0 disk
└─nvme0n1p1 259:5    0 931.5G  0 part /mnt/pve/nvme1TB_2

Code:
sudo fdisk -l
Disk /dev/loop0: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 15.53 TiB, 17072495001600 bytes, 33344716800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/nvme1n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 990 PRO 1TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2427D65D-FC3D-4BAC-AB1E-712BDFC9E4E8

Device              Start        End    Sectors   Size Type
/dev/nvme1n1p1       2048    1050623    1048576   512M EFI System
/dev/nvme1n1p2    1050624 1951522815 1950472192 930.1G Linux filesystem
/dev/nvme1n1p3 1951522816 1953523711    2000896   977M Linux swap


Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 990 PRO 1TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 12B8B490-FB4B-49A8-9617-7BD307713ADA

Device         Start        End    Sectors   Size Type
/dev/nvme0n1p1  2048 1953525134 1953523087 931.5G Linux filesystem


Disk /dev/sda: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: APPLE HDD ST3000
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 703C5702-20C6-4A86-88CF-78AB9A41F4B9

Device     Start        End    Sectors  Size Type
/dev/sda1   2048 5860533134 5860531087  2.7T Linux filesystem


Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68E
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: BA69D752-7F3A-4C3E-9365-2068C4D1AF8D

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 3907029134 3907027087  1.8T Linux filesystem


Disk /dev/sdc: 14.55 TiB, 16000900661248 bytes, 31251759104 sectors
Disk model: ST16000NM000J-2T
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B7FC1577-B538-4387-8099-56DEE1EC3AC4

Device     Start         End     Sectors  Size Type
/dev/sdc1   2048 31251759070 31251757023 14.6T Linux filesystem
 
Provide the output of "cat /proc/cmdline" as well.
I wrote that in my previous post xD

I have always mounted them in mnt, what should I do?
-> It depends on what you use them for. Some sort of ZFS RAID10, for example, since it's 8 disks? I don't know; usually with 8 disks I would build a single pool of some kind, instead of using them directly as folders.

However, step by step: first I need to see the cmdline, maybe there is something weird in it.
Otherwise you could try to replace the UUID with the PARTUUID for the root partition. I doubt that will fix it, but it's worth a try anyway. Or use /dev/nvme1n1p2, but I'm not sure whether the kernel (at boot time) sees your SSD as nvme0n1p2 or nvme1n1p2, so in the worst case you would break booting even with your old kernel. PARTUUID should still be safe to try; /dev/nvme1n1p2 is probably unsafe, but it could actually boot with the new kernel.

You can exchange UUID=... with PARTUUID=9ea58d16-47e7-4054-b6b1-a7c037d255ee in /etc/fstab.
However, like I mentioned, I doubt it will help; it's just worth a try.

Let's see the cmdline first anyway.
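The resulting root line in /etc/fstab would then look like this (a sketch, using the PARTUUID of /dev/nvme1n1p2 from the blkid output above):

Code:
# / was on /dev/nvme1n1p2; identified by PARTUUID instead of filesystem UUID
PARTUUID=9ea58d16-47e7-4054-b6b1-a7c037d255ee /  ext4  errors=remount-ro  0  1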
 
Here it is:
Code:
cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.5.13-5-pve root=UUID=e1fca195-3f22-4cd2-90b8-e7c32a3d9b85 ro quiet intel_iommu=on iommu=pt

Thank you
 
That's absolutely perfect; tbh, everything looks correct!

I honestly have no clue why the 6.8 kernel can't find your root partition.
Maybe the initrd is somehow broken. Did you try to rebuild it with update-initramfs -u -k all?
Otherwise, maybe the 6.8.1 kernel really is broken for your CPU, since no extra driver/module is needed for an NVMe root disk.
I'm sorry. Cheers

After the update-initramfs -u -k all, do a proxmox-boot-tool refresh.
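Put together, the rebuild would look like this (a sketch; run as root from a working boot, e.g. the 6.5 kernel):

Code:
# rebuild the initramfs for all installed kernels
update-initramfs -u -k all
# copy the refreshed kernels and initrds to the ESP(s) managed by proxmox-boot-tool
proxmox-boot-tool refresh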
 
Did you manage to solve this problem?
My Proxmox stopped booting with the same error on an Intel NUC11 PC. Before that it worked just fine, and nothing changed in the BIOS.
So I reinstalled Debian 12, and right after installing the Proxmox packages with the default kernel and rebooting: same error.
 
Attachments: 001.jpeg

Have you solved it?
 
