Disk mount issues with 6.14.8-2-pve kernel after upgrading to PVE9

andymdoyle

New Member
Aug 11, 2025
I have a mini PC that has been happily running PVE for over a year. It has been on the latest release of PVE 8, most recently with kernel 6.8.12-13-pve.

Today I did an in-place upgrade to PVE9, following https://pve.proxmox.com/wiki/Upgrade_from_8_to_9, and all went well until it came time to reboot. On booting I was presented with the following:

[screenshot: disk mount errors during boot with the 6.14.8-2-pve kernel]
Booting in advanced mode and picking the 6.14.8-2-pve recovery option led to the same errors, plus a constant stream of messages like:

[screenshot: repeating error messages under the 6.14.8-2-pve recovery option]
If at any point I rebooted back into 6.8.12-13-pve, the system booted normally.

fstab looks like this:

#
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=4C18-6924 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0



I also attempted to boot after changing UUID=4C18-6924 to /dev/sda2, as per the lsblk output:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 476.9G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 1G 0 part /boot/efi
└─sda3 8:3 0 475.9G 0 part
├─pve-swap 252:0 0 8G 0 lvm [SWAP]
├─pve-root 252:1 0 96G 0 lvm /
├─pve-data_tmeta 252:2 0 3.6G 0 lvm
│ └─pve-data-tpool 252:4 0 348.8G 0 lvm
│ ├─pve-data 252:5 0 348.8G 1 lvm
│ ├─pve-vm--103--disk--0 252:6 0 4G 0 lvm
│ ├─pve-vm--104--disk--0 252:7 0 2G 0 lvm
│ ├─pve-vm--101--disk--0 252:8 0 5G 0 lvm
│ ├─pve-vm--102--disk--0 252:9 0 8G 0 lvm
│ ├─pve-vm--107--disk--0 252:10 0 10G 0 lvm
│ ├─pve-vm--106--disk--0 252:11 0 4M 0 lvm
│ ├─pve-vm--106--disk--1 252:12 0 32G 0 lvm
│ └─pve-vm--108--disk--0 252:13 0 8G 0 lvm
└─pve-data_tdata 252:3 0 348.8G 0 lvm
└─pve-data-tpool 252:4 0 348.8G 0 lvm
├─pve-data 252:5 0 348.8G 1 lvm
├─pve-vm--103--disk--0 252:6 0 4G 0 lvm
├─pve-vm--104--disk--0 252:7 0 2G 0 lvm
├─pve-vm--101--disk--0 252:8 0 5G 0 lvm
├─pve-vm--102--disk--0 252:9 0 8G 0 lvm
├─pve-vm--107--disk--0 252:10 0 10G 0 lvm
├─pve-vm--106--disk--0 252:11 0 4M 0 lvm
├─pve-vm--106--disk--1 252:12 0 32G 0 lvm
└─pve-vm--108--disk--0 252:13 0 8G 0 lvm
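
That is, the /boot/efi line in fstab became something like:
Code:
/dev/sda2 /boot/efi vfat defaults 0 1

And df output looks like this: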

Filesystem Size Used Avail Use% Mounted on
udev 8.3G 0 8.3G 0% /dev
tmpfs 1.7G 2.8M 1.7G 1% /run
/dev/mapper/pve-root 101G 24G 72G 25% /
tmpfs 8.3G 36M 8.3G 1% /dev/shm
efivarfs 197k 90k 102k 47% /sys/firmware/efi/efivars
tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs 1.1M 0 1.1M 0% /run/credentials/systemd-journald.service
tmpfs 8.3G 0 8.3G 0% /tmp
/dev/sda2 1.1G 9.2M 1.1G 1% /boot/efi
/dev/fuse 135M 46k 135M 1% /etc/pve



Any ideas what is causing this with the latest kernel?
 
The error messages look like hardware issues, but maybe your system is buggy on the new kernel - those mini PCs often don't run particularly stably, unfortunately. I'd double-check the memory and disks.
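For example, something along these lines (device path and memory size are just placeholders):
Code:
smartctl -a /dev/sda     # SMART health, attributes and error log for the disk
memtester 4096M 1        # one pass over 4 GiB of RAM (from the memtester package)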
 
The hardware has been stable for ~2 years and had no issues with Windows and Ubuntu previously. Proxmox 8 has been solid since it was installed a year or so ago. It's just this latest kernel that is sending it all over the place.

Anyone else experiencing issues with 6.14?
 
Hi, I have 3 of these N100 (Firebat T8 Plus) mini PCs in a cluster at home.
Only one of them exhibits the hard drive issue with kernel 6.14.
Either it cannot find the drive to continue booting after GRUB, or it boots up normally and then locks up after a couple of hours (ATA hard resets, IIRC) and needs a reboot.
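
When it locks up (or after the reboot, if the journal is persistent), the ATA resets can be pulled out of the kernel log with something like:
Code:
dmesg -T | grep -iE 'ata[0-9]+.*(reset|link)'        # while the box is still reachable
journalctl -k -b -1 | grep -iE 'ata[0-9]+.*reset'    # previous boot, needs a persistent journal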

The other two units work flawlessly with 6.14 from the day I upgraded to PVE9.

The units were bought at different times and the unit that hangs was the first one I got.

The BIOS version (5.26) appears to be the same on all units, although with different BIOS dates:
09/26/2023 on units 1 and 2 vs 01/16/2024 on unit 3. The disk firmware, however, differs between the units:

Device Model: N900-512
Firmware Version: W0825A (The one that has issues)
Firmware Version: W0220A0 (no issues)
Firmware Version: W0704A0 (no issues)

With these units it is rather hard to source new BIOS versions, and it would be almost impossible to find a disk firmware update. And if by some luck you did find an update, the chances of bricking are too high, since you can never be too sure of the naming and versioning scheme of these units.

What I haven't tried is using another drive, installing and upgrading Proxmox, and seeing whether it has the same issues again.

The workaround for the time being is to boot the latest 6.8 kernel on the problematic unit; with the older kernel there are no stability issues.
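
If you don't want to pick the 6.8 kernel in the boot menu every time, it can be pinned so it is selected by default (version string taken from earlier in the thread; proxmox-boot-tool ships with PVE):
Code:
proxmox-boot-tool kernel list              # confirm which kernels are installed
proxmox-boot-tool kernel pin 6.8.12-13-pve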
 
For completeness' sake, what ultimately fixed it was adding
Code:
libata.force=nolpm
to the boot arguments.
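
On a standard GRUB setup like this one (ext4/LVM), that means appending it to the kernel command line in /etc/default/grub and regenerating the config; something like:
Code:
# /etc/default/grub - keep whatever options are already there and add the parameter
GRUB_CMDLINE_LINUX_DEFAULT="quiet libata.force=nolpm"

# apply the change
update-grub
Systems whose boot entries are managed by proxmox-boot-tool (e.g. ZFS installs) would instead edit /etc/kernel/cmdline and run proxmox-boot-tool refresh.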
Hmm... you got me thinking. I was also having problems booting my Firebat T8 under the 6.14 and 6.17 kernels, but I thought it was caused by my ASM1064 adapter (link) - libata.force=nolpm allowed it to boot correctly, but I don't think I ever tried booting it without the ASM1064 adapter.

I've just checked the N900-512 SSD (I still haven't swapped it for anything higher quality) and its firmware is the same as yours - `W0825A0`.

So maybe it's the SSD that's the culprit, not the ASM1064 adapter. Anyway, I've already ordered an NVMe drive - remember that the M.2 slot is PCIe 3.0 x2, which would give you around 1700 MB/s, while the N900 is a SATA drive. These cheap N900 drives are known to die; mine currently has a write speed of around 25 MB/s for some reason (after 14k hours of use).
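
For anyone wanting to compare firmware versions, smartctl (from smartmontools) reports them - the device path below is just an example:
Code:
smartctl -i /dev/sda | grep -iE 'model|firmware'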
 
Hi again!

Today I replaced the N900 with a Corsair MP600 (R2) 1TB - and you were right! It wasn't the ASM1064 that was causing the issue, it was the N900 with the W0825A0 firmware!

After I put the NVMe drive in my Firebat T8, I no longer needed `libata.force=nolpm`, which I guess is a good thing.

Again, a tip from me - do yourself a favour and replace your drives as well; it's wasted potential to use a cheap SATA SSD (that might die or malfunction like mine) when you can utilise an NVMe drive at PCIe 3.0 x2 speed.
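
If you want to verify what link the M.2 slot actually negotiated after swapping, lspci shows it (the PCI address below is just an example):
Code:
lspci | grep -i nvme                        # find the NVMe controller's address
lspci -vv -s 01:00.0 | grep -i 'LnkSta:'    # e.g. "Speed 8GT/s, Width x2" = PCIe 3.0 x2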