Error preparing initrd: Bad Buffer Size

I upgraded from 7 to 8 and got this error. A clean install of Proxmox 8 on a Dell R620 gives the same: "Error preparing initrd: Bad Buffer Size".

I upgraded from 7 to 8 and it works fine.

I am using UEFI and software (Proxmox) ZFS RAID 1 on SSDs.

PERC H710 Mini flashed to IT mode.
 
Just attempted a clean install of 8.0-2 on an R720xd and am hitting the same message.

Using 3x drives in raidz1
240 GB ECC RAM
PERC H710 Mini
 
Are you using UEFI or BIOS? I didn't check, but I think it might be a problem with UEFI. I know Unraid has issues with UEFI on Dell hardware too.
 
UEFI.

I tried to go back to 7.4, and then it booted to messages about not being able to import the ZFS pool because there were duplicate names. After fiddling around with that, I reinstalled 7.4 and now it works again.

Maybe the underlying ZFS issue was part of my problem with 8.0-2, in addition to the UEFI weirdness.
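
If anyone else hits the duplicate-name import problem, importing the pool by its numeric ID instead of by name should get around it. Roughly (just a sketch; the ID below is a placeholder, use whatever `zpool import` prints for your pool):
Code:
# list importable pools together with their numeric IDs
zpool import
# import the affected pool by ID, forcing if it was last used by another system
zpool import -f 1234567890123456789 rpool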
 
I have upgraded my test cluster of R620s (some nodes on LVM, others on ZFS). I don't have this error; they are booting fine.

The only difference is that I don't use the internal PERC, but an extra controller in JBOD mode.

05:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
 
Are you booting PVE with UEFI or BIOS? The problem only seems to occur when booting via UEFI.

Edit: also, I'm not certain, but I believe that affected systems upgraded to v8 will boot, while new installs won't.
 
For anyone stumbling upon this, I was able to circumvent the issue by turning on fast boot. Hilariously, fast boot is actually slower, but it seems to properly load some OROM driver that doesn't load on "slow" boot. Go figure.
 
Neat. I'll try it on a system I'm about to set up again. It already takes forever to boot, so there's nothing to lose there.
 
Hi,
I'm having this same issue on a Dell T420 server. I don't think I can use fast boot, so I was dead in the water and went back to 7.4. But I did find this post, https://github.com/systemd/systemd/issues/25911, and it is very similar to this issue. Hope it helps.
Yes, this issue is already linked earlier in this thread and in the Bugzilla entry ;) And @t.lamprecht has now backported the fix: https://git.proxmox.com/?p=systemd.git;a=commitdiff;h=ebe43c4ec7ad7c60fef7e2bcb8ae4d1a3adeacd1

The systemd packages with the fix are currently available in the pvetest repository, and the other repositories will follow if nothing comes up.
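
If you want to test the fixed packages right away, enabling pvetest should look roughly like this (a sketch, assuming PVE 8 on Debian bookworm and the usual package names):
Code:
# add the pvetest repository
echo "deb http://download.proxmox.com/debian/pve bookworm pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
# pull in the fixed systemd packages
apt install systemd systemd-boot systemd-boot-efi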
 
Is there an install ISO available with the fix?
No, but if you are affected, you can either install 7.4 and upgrade to 8.0, or use the debug mode during installation, which will drop you into a shell after the installation has finished, where you can upgrade the systemd package(s).
 
drop you into a shell after the installation has finished, where you can upgrade the systemd package(s).
Is there a process described anywhere for doing this? I pulled the latest systemd-boot and systemd-boot-efi debs from the repository, but I don't think it's as simple a matter as "dpkg -i <deb>".

Edit: I was able to chroot into a fresh install using [most of] the steps listed here, then manually installed the libsystemd-shared (dependency), systemd-boot, and systemd-boot-efi packages from the no-sub repository, all version 252.11-pve1 (which includes the "small chunks" backport). I ran "pve-efiboot-tool refresh" afterwards, then backed out and rebooted, but it didn't help; I'm still getting the "Bad Buffer Size" error on boot.

Is there a step I may have missed?

Edit 2: In the interest of trying everything, I booted up the PVE8 install media, dropped to a shell, chrooted back into the previously mentioned PVE8 install, and after getting a rudimentary network connection up, ran "apt-get update/upgrade" against the Debian bookworm and pve-no-sub repositories. After repairing the earlier install of the systemd 252.11-pve1 debs (version mismatch, fixed with "apt-get --fix-broken install"), the upgrade completed without incident. (For the record, the systemd-* packages installed at this point are 252.12-pmx1.) I ran "proxmox-boot-tool refresh" again, because why not, then backed out of the chroot and rebooted, and it still doesn't work.
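
One more thing I want to try (just a sketch; /dev/sda2 stands in for whatever your ESP device actually is): comparing the systemd-boot binary shipped by the package with what's actually on the ESP, to see whether the refresh rewrote anything at all:
Code:
# versions of the installed systemd-boot packages
dpkg -l 'systemd-boot*'
# checksum of the binary the package ships
md5sum /usr/lib/systemd/boot/efi/systemd-bootx64.efi
# checksums of the binaries actually on the ESP (/dev/sda2 is an assumed device name)
mount /dev/sda2 /mnt
md5sum /mnt/EFI/systemd/systemd-bootx64.efi /mnt/EFI/BOOT/BOOTX64.EFI
umount /mnt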
 
Looking for a fix for this too - a new install on a Lenovo server with UEFI/ZFS and an HBA results in a dead boot - OK with PVE7
 
No, but if you are affected, you can either install 7.4 and upgrade to 8.0, or use the debug mode during installation, which will drop you into a shell after the installation has finished, where you can upgrade the systemd package(s).

Can confirm: the issue persists on an R620, and chrooting in to do an update from the pvetest repos doesn't solve it :(

Here's a high-level overview of what I did:
Run the installer in debug mode; it drops into a shell after the installation completes and you click "reboot"
zpool import rpool
chroot /
ifconfig eno3 10.1.10.30 netmask 255.255.255.0 up
route add default gw 10.1.10.1
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
add "deb http://download.proxmox.com/debian/pve bookworm pvetest" to /etc/apt/sources.list
apt update
apt upgrade -y
proxmox-boot-tool refresh
CTRL+D to exit the debug shell and continue the reboot

So as of now, if you have a Dell *20 generation machine, you're SOL for PVE8 without doing an upgrade from 7.4.x.
 
Hi,
it seems like there is still a command missing to make it work. I'm not sure proxmox-boot-tool refresh is enough to update the bootloader itself; I think it only updates the boot configuration.

EDIT: @Stoiko Ivanov pointed out that there is
Code:
USAGE: /usr/sbin/proxmox-boot-tool reinit

    reinitialize all configured EFI system partitions from /etc/kernel/proxmox-boot-uuids.
so what I wrote below can be done in one go with that command.

Can you try doing
Code:
proxmox-boot-tool init /dev/sda2
for each of your ESPs? This should reinstall systemd-boot on the partition. At least I get
Code:
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/4BBB-2672/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/4BBB-2672/EFI/BOOT/BOOTX64.EFI".
 
