PVE 7.4-3 After upgrade suddenly systemd as bootloader but no /etc/kernel/cmdline

riegera2412
May 25, 2023
Dear all,

I am new to this forum and mostly new to Proxmox as well. I have had my little Proxmox machine running for a couple of months now without a glitch. I am really no expert in any of this and was amazed at how easily most things in Proxmox can be set up, and also how good the community support and the documentation on the website are. With this problem, however, I am stuck and don't know what to do, and I am afraid of breaking something that cannot be reverted.

After an apt update && apt full-upgrade the server didn't boot up and got stuck at "Loading initial ramdisk". I resolved this myself with the help of some forum threads and the Proxmox ISO rescue mode, and now the server is up and running mostly normally (normal from my point of view).

However, after booting successfully, my system now uses systemd-boot as the bootloader instead of GRUB, which it definitely used before (blue background menu with all the boot entries). Also, from what I have read in the documentation, systemd-boot is only used when root is installed on a ZFS pool during installation, which it definitely is not in my case.

While this is not a huge issue for my LXCs, it is for a VM (Windows 11 Pro) that was configured with GPU passthrough. All this configuration was done using the walkthrough on the Proxmox website, and it went without a glitch. I set up the VM and the passthrough worked right away.

Now, however, it doesn't, since IOMMU is not enabled: the original setup was done with GRUB as the bootloader. So I checked the walkthrough on the website again and found the section for systemd-boot. There it says to add intel_iommu=on iommu=pt to the end of /etc/kernel/cmdline and to make sure everything stays on one line. However, this file does not exist on my machine, so I obviously cannot edit it.
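For reference, if I understand the documentation correctly, /etc/kernel/cmdline is supposed to hold a single line with the root device plus any kernel options. With my root on LVM (the root= value matches my df output below, the rest is my assumption), I would expect it to look something like:

```
root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt
```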

I did not find anything on how to create this file on the system, but I saw in a post here on the forum that someone used efibootmgr -v to check something, and the output also said that /etc/kernel/cmdline cannot be found and that the system uses /proc/cmdline as a fallback. So I thought I could just copy that file to /etc/kernel/cmdline and use it instead. I did, added intel_iommu=on iommu=pt and rebooted. That didn't work: the system threw an error about not being able to find an OS. So I booted into rescue mode again, removed /etc/kernel/cmdline, and the system booted up as it did before.
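In case it helps anyone: my guess (an assumption on my part, not something from the docs) is that copying /proc/cmdline verbatim is the problem, because it also carries the BOOT_IMAGE= and initrd= tokens that the boot loader itself added, and those presumably must not end up in /etc/kernel/cmdline. A sketch of stripping them, using a sample string instead of the live file:

```shell
# Assumption: only the root device and kernel options belong in
# /etc/kernel/cmdline; drop the loader-added BOOT_IMAGE=/initrd= tokens.
current='BOOT_IMAGE=/boot/vmlinuz-5.15.107-2-pve root=/dev/mapper/pve-root ro quiet'
cleaned=$(printf '%s\n' "$current" | tr ' ' '\n' | grep -Ev '^(BOOT_IMAGE|initrd)=' | tr '\n' ' ')
printf '%s\n' "${cleaned}intel_iommu=on iommu=pt"
# -> root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt
```

On the real system one would then write that single line to /etc/kernel/cmdline and run proxmox-boot-tool refresh so it gets picked up.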

Now I am stuck with the VM: I cannot enable GPU passthrough because I cannot edit /etc/kernel/cmdline. Any help any of you can give is greatly appreciated. I don't know what information you need from me, so I just appended the output of all the commands I came across while searching for a solution myself. If more info is required, I will provide it to the best of my ability.

Best regards,
André

Appendix

root@prometheus:~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-3
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.0
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-2
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

root@prometheus:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
udev                     16G     0   16G   0% /dev
tmpfs                   3.2G  1.2M  3.2G   1% /run
/dev/mapper/pve-root     21G  4.5G   16G  23% /
tmpfs                    16G   43M   16G   1% /dev/shm
tmpfs                   5.0M     0  5.0M   0% /run/lock
/dev/sdc4               433G   42G  369G  11% /mnt/pve/CTs
/dev/sdc5               433G   58G  353G  15% /mnt/pve/VMs
/dev/sdc2              1022M  140M  883M  14% /boot/efi
zfsPool                 1.8T   70G  1.7T   4% /mnt/zfsPool
zfsPool/dockerData      1.7T   15G  1.7T   1% /mnt/zfsPool/dockerData
zfsPool/fileServerData  1.7T  6.5G  1.7T   1% /mnt/zfsPool/fileServerData
zfsPool/scanner         1.7T  5.9G  1.7T   1% /mnt/zfsPool/scanner
zfsPool/isoData         1.7T  128K  1.7T   1% /mnt/zfsPool/isoData
zfsPool/nextcloudData   1.7T  791M  1.7T   1% /mnt/zfsPool/nextcloudData
/dev/fuse               128M   20K  128M   1% /etc/pve
tmpfs                   3.2G     0  3.2G   0% /run/user/0

root@prometheus:~# fdisk -l
Disk /dev/loop0: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop1: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop2: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop3: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 870
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5CCE1AFE-D06D-4446-9AD7-473C4C8A5497

Device          Start        End    Sectors  Size Type
/dev/sda1        2048 3907012607 3907010560  1.8T Solaris /usr & Apple ZFS
/dev/sda9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 870
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 74DA7F5E-E7C7-1440-B7F1-E07F9FD07B95

Device          Start        End    Sectors  Size Type
/dev/sdb1        2048 3907012607 3907010560  1.8T Solaris /usr & Apple ZFS
/dev/sdb9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: SanDisk SDSSDH3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 74128F28-015F-4847-AE8F-83EB97EE0B38

Device           Start        End   Sectors   Size Type
/dev/sdc1           34       2047      2014  1007K BIOS boot
/dev/sdc2         2048    2099199   2097152     1G EFI System
/dev/sdc3      2099200  104857600 102758401    49G Linux LVM
/dev/sdc4    104859648 1029192391 924332744 440.8G Linux filesystem
/dev/sdc5   1029193728 1953525134 924331407 440.8G Linux filesystem

Disk /dev/mapper/pve-swap: 6 GiB, 6442450944 bytes, 12582912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/pve-root: 21.5 GiB, 23081254912 bytes, 45080576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

root@prometheus:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
C34E-BF85 is configured with: uefi (versions: 5.15.107-1-pve, 5.15.107-2-pve)

root@prometheus:~# proxmox-boot-tool refresh
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/cmdline found - falling back to /proc/cmdline
Copying and configuring kernels on /dev/disk/by-uuid/C34E-BF85
Copying kernel and creating boot-entry for 5.15.107-1-pve
Copying kernel and creating boot-entry for 5.15.107-2-pve


root@prometheus:~# efibootmgr -v
BootCurrent: 0000
Timeout: 0 seconds
BootOrder: 0000,0008,0007,0006
Boot0000* Linux Boot Manager  HD(2,GPT,03818ff4-43c5-4963-925f-38ad9a3b5740,0x800,0x200000)/File(\EFI\systemd\systemd-bootx64.efi)
Boot0006* Generic Usb Device  VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
Boot0007* UEFI: IP4 Intel(R) 82579LM Gigabit Network Connection  PciRoot(0x0)/Pci(0x19,0x0)/MAC(fc4dd4386149,0)/IPv4(0.0.0.00.0.0.0,0,0)AMBO
Boot0008* UEFI: IP6 Intel(R) 82579LM Gigabit Network Connection  PciRoot(0x0)/Pci(0x19,0x0)/MAC(fc4dd4386149,0)/IPv6([::]:<->[::]:,0,0)AMBO
 
Hey guys,

I just had a second look at it with fresh eyes after a break and found the solution myself. Apparently the error is caused by proxmox-boot-tool refresh. When it runs, it writes the loader entry files, but with a "typo": instead of a "\" in the path to the initrd image it writes "^E" (a control character), which causes the bootloader to not find the correct image. I corrected it manually, and now the boot runs through without a glitch and the VM comes up with PCI passthrough as well.
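For anyone hitting the same thing, here is a sketch of the kind of manual fix I mean, shown on a scratch file rather than the real ESP (an illustration only: on the live system the entries are under /boot/efi/loader/entries/*.conf, and "^E" is the control byte 0x05):

```shell
# Reproduce a loader entry whose initrd path has a stray ^E (0x05)
# where a backslash belongs, then swap it back with tr.
entry=$(mktemp)
printf 'initrd \\EFI\\proxmox\005initrd.img-5.15.107-2-pve\n' > "$entry"
tr '\005' '\\' < "$entry" > "${entry}.fixed"
cat "${entry}.fixed"
# -> initrd \EFI\proxmox\initrd.img-5.15.107-2-pve
```

Note that a later proxmox-boot-tool refresh (e.g. on a kernel update) rewrites these files, so the underlying bug would presumably reappear until it is fixed upstream.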

Hopefully this didn't cost any of you too much effort investigating. If it did, I apologize.

André
 
