I believe I have the same issue. Where did you edit the ifupdown2 file to make the replacement?

In /usr/share/ifupdown2/ifupdown/main.py, change:
parser.readfp(configFP)
to:
parser.read_file(configFP)
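If you'd rather not open an editor, something along these lines should apply the same change (the sed pattern is just an illustration of the edit above; keep the backup in case anything goes wrong):

# back up the original file first
cp /usr/share/ifupdown2/ifupdown/main.py /usr/share/ifupdown2/ifupdown/main.py.bak
sed -i 's/parser\.readfp(configFP)/parser.read_file(configFP)/' /usr/share/ifupdown2/ifupdown/main.py
# reload the network configuration to confirm networking comes back
ifreload -a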
This worked. My issue was not with the PVE upgrade to 9.0; that went smooth as butter. I then upgraded the Proxmox Backup Server to 4.0, and after reboot the networking was hard down. Apparently part of the process upgraded Python, and the latest version replaced the syntax used for that particular method. This fixed the networking issues, but it sure seems unrelated to Proxmox and more related to the Linux distribution itself. Scary, as this could have happened to any user who upgraded the distribution to Trixie.
After I got the networking back up, I did find a pbs-enterprise.sources file and commented all the lines out with #. This allowed apt update to run smoothly with no errors. Thank you!

> The main issue here is probably that the sources for the Debian repositories got updated to trixie, but there are still bookworm PVE repositories left. Please make sure that every repository is correctly configured and then dist-upgrade.
Thanks, this worked for me!

ChatGPT helped me fix it. Sharing it here in case someone else needs it.
First, you need to repair the broken thin‐pool metadata, then bring it (and its VG) back online, and finally re-enable it in Proxmox:
1. Repair the thin-pool metadata
The “status:64” error means the pool’s metadata is corrupt or full. Repair it with:
lvconvert --repair /dev/pve/data
This reads the damaged metadata LV, writes a repaired copy to the pool's pmspare LV, and swaps it into place.
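If you want to see how full the pool and its metadata are before and after the repair, the standard lvs fields cover that (VG name pve as on a default install; adjust if yours differs):

# show data and metadata usage of the thin pool
lvs -a -o lv_name,lv_attr,data_percent,metadata_percent pve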
2. Activate the VG and pool
Once the repair finishes without errors, activate your volume group and the pool itself:
vgchange -ay
lvchange -ay /dev/pve/data
This makes /dev/pve/data writable and available again.
You can confirm it’s active when the lv_attr value for “data” shows a (active) as the fifth character, e.g. twi-aotz--:
lvs -o lv_name,lv_attr
3. Re-enable the storage in Proxmox
Finally, clear the “disabled” flag so Proxmox will use it:
pvesm set local-lvm --disable 0
systemctl restart pvedaemon pveproxy pvestatd
This flips local-lvm back on in /etc/pve/storage.cfg and reloads the storage daemons.
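For reference, the local-lvm stanza in /etc/pve/storage.cfg on a default install looks roughly like this (thin pool and VG names may differ on your system); a disable flag inside this block is what the pvesm set command above clears:

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images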
Verify with:
pvesm status
You should now see local-lvm listed as active again.
I believe I'm experiencing this same issue (unable to boot after upgrade; eventually it takes me to BIOS). Proxmox is installed on an ext4 partition on an NVMe. I don't know how to resolve this. Please help.

Just hopping in to say I upgraded from 8.4.1 to 9 and it wiped my EFI partition. Unlike ulistermclane, I had a default ext4 LVM partition.
# enable Direct I/O on the pool for the first test run
zfs set direct=always rpool
# install monitoring and benchmarking tools
apt install sysstat -y
apt install fio -y
# buffered fio run (--direct=0)
rm /dev/zvol/rpool/data/test.file
fio --filename=/dev/zvol/rpool/data/test.file --name=sync_randrw --rw=randrw --bs=4M --direct=0 --sync=1 --numjobs=1 --ioengine=psync --iodepth=1 --refill_buffers --size=8G --loops=2 --group_reporting
# O_DIRECT fio run (--direct=1)
rm /dev/zvol/rpool/data/test.file
fio --filename=/dev/zvol/rpool/data/test.file --name=sync_randrw --rw=randrw --bs=4M --direct=1 --sync=1 --numjobs=1 --ioengine=psync --iodepth=1 --refill_buffers --size=8G --loops=2 --group_reporting
# rsync copy with Direct I/O enabled ...
zfs set direct=always rpool
rsync -aP /dev/zvol/rpool/data/test_random_disk.qcow2 /dev/zvol/rpool/data/test_random_disk_io_direct_enabled.qcow2
# ... and with Direct I/O disabled
zfs set direct=disabled rpool
rsync -aP /dev/zvol/rpool/data/test_random_disk.qcow2 /dev/zvol/rpool/data/test_random_disk_io_direct_disabled.qcow2
# Windows: create an 8 GiB (8589934592 byte) test file
fsutil file createnew C:\test_disk.dat 8589934592
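To confirm what the pool is currently set to between runs, the property can simply be read back (the direct dataset property exists in OpenZFS 2.3 and newer):

# show the current Direct I/O setting on the pool's root dataset
zfs get direct rpool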
Thank you so much, my friend. You’re genuinely a blessing in my life.
> I believe I'm experiencing this same issue (unable to boot after upgrade; eventually it takes me to BIOS). Proxmox is installed on an ext4 partition on an NVMe. I don't know how to resolve this. Please help.

I solved my issue by running the command below. Worth noting: using the Proxmox installation ISO version 9 and choosing Rescue Boot, it boots me into my local Proxmox.
proxmox-boot-tool init /dev/nvme0n1p2
This was going to be my next step. I tried it before I made my post, but my ISO also did not have the grub packages installed, so I just shut it down and decided I would come back to it later.

Essentially, my boot partition needed to be initialized. I'm not quite sure why it didn't happen automatically during the upgrade. I solved my issue by:
- Booting into Rescue Mode with the Proxmox Installation ISO (version 9) which booted into my local Proxmox install.
- Running proxmox-boot-tool init /dev/nvme0n1p2
- /dev/nvme0n1p2 is my EFI boot partition
- Reboot
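If you're not sure which partition is your ESP before running the init step above, something like this should help identify it (device names here are just examples):

# the ESP is the vfat partition with partition type "EFI System"
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME
# list the ESPs proxmox-boot-tool already knows about
proxmox-boot-tool status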
We tried to make those checks as safe as possible so this should not cause issues.
A bit of background - currently systems:
* having root on ZFS or BTRFS
* booting using UEFI (not legacy bios boot)
* not having secure-boot enabled
use systemd-boot for booting
`proxmox-boot-tool status` should provide some helpful information
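If you want to check those three conditions on a node yourself, something along these lines should work (mokutil may need to be installed first and is only used for the secure boot check):

# root filesystem type (zfs / btrfs / ext4 ...)
findmnt -n -o FSTYPE /
# UEFI or legacy BIOS boot?
[ -d /sys/firmware/efi ] && echo "UEFI" || echo "legacy BIOS"
# secure boot state
mokutil --sb-state
# which bootloader(s) proxmox-boot-tool manages
proxmox-boot-tool status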
Additionally the `systemd-boot` package got split up a bit further in trixie - and proxmox-boot-tool only needs `systemd-boot-tools` and `systemd-boot-efi` - the `systemd-boot` meta-package is currently incompatible (as it tries updating the EFI, despite it not being mounted, which causes an error upon upgrade)
I hope this helps!
root@andromeda2:~# apt search systemd-boot
Sorting... Done
Full Text Search... Done
proxmox-kernel-helper/stable,now 8.1.4 all [installed]
Function for various kernel maintenance tasks.
pve-kernel-helper/stable 7.3-4 all
Function for various kernel maintenance tasks.
systemd-boot/stable-security,now 252.38-1~deb12u1 amd64 [installed]
simple UEFI boot manager - tools and services
systemd-boot-dbgsym/stable 252.12-pmx1 amd64
debug symbols for systemd-boot
systemd-boot-efi/stable-security,now 252.38-1~deb12u1 amd64 [installed]
simple UEFI boot manager - EFI binaries
Systemd-boot meta-package changes the bootloader configuration automatically and should be uninstalled
With Debian Trixie the systemd-boot package got split up a bit further into systemd-boot-efi (containing the EFI-binary used for booting), systemd-boot-tools (containing bootctl) and the systemd-boot meta-package (containing hooks which run upon upgrades of itself and other packages and install systemd-boot as bootloader).
As Proxmox systems usually handle the installation of systemd-boot-efi as bootloader via proxmox-boot-tool, the meta-package systemd-boot should be removed. The package was automatically shipped for systems installed from the PVE 8.1 to PVE 8.4 ISOs, as it contained bootctl in bookworm.
If the pve8to9 checklist script suggests it, the systemd-boot meta-package is safe to remove unless you manually installed it and are using systemd-boot as a bootloader. Should systemd-boot-efi and systemd-boot-tools be required, pve8to9 will warn you accordingly.
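If pve8to9 flags it on your system, the removal itself is a single command; apt shows what it intends to remove before you confirm, so double-check that systemd-boot-efi and systemd-boot-tools are not in that list:

# remove only the meta-package suggested by pve8to9
apt remove systemd-boot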
I'm not sure if that means I'm okay to upgrade or not. This is really confusing.

INFO: Checking bootloader configuration...
SKIP: not yet upgraded, systemd-boot still needed for bootctl
After the upgrade, running pve8to9 --full returns the warning below; prior to the upgrade it reported no warnings.

WARN: Found '3' RRD files that have not yet been migrated to the new schema.
pve-storage-9.0/node2/nfs-storage1
pve-storage-9.0/node2/nfs-storage1
pve-storage-9.0/node2/nfs-storage1
Please run the following command manually:
/usr/libexec/proxmox/proxmox-rrd-migration-tool --migrate
/usr/libexec/proxmox/proxmox-rrd-migration-tool --migrate
returns:

Migrating RRD metrics data for nodes…
Migrated metrics of all nodes to new format
Migrating RRD metrics data for storages…
Migrated metrics of all storages to new format
Migrating RRD metrics data for virtual guests…
Using 6 thread(s)
No guest metrics to migrate
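Once the migration tool reports success, re-running the checker should no longer show that warning:

pve8to9 --full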
That was not the right part of the log (that seems to have been an upgrade to 8.3/8.4 packages?) - but it also looks like it got interrupted at the grub install stage, which is peculiar.

> term.log attached. The sequence that hung is the second to last in the log.
> I'm using ZFS with UEFI, and secure boot disabled, and SystemD-Boot is used on my nodes instead of Grub, so this applies to me.

Yes.
> I appear to have the full SystemD boot package installed:

Yes - because you're still on PVE-8 (bookworm) - I thought that the following part should explain that:

> With Debian Trixie the systemd-boot package got split up a bit further into systemd-boot-efi (containing the EFI-binary used for booting), systemd-boot-tools (containing bootctl) and the systemd-boot meta-package (containing hooks which run upon upgrades of itself and other packages and install systemd-boot as bootloader).

> INFO: Checking bootloader configuration...
> SKIP: not yet upgraded, systemd-boot still needed for bootctl

It will warn you after upgrading to 9 (at that point the system will have `systemd-boot-tools`, `systemd-boot-efi` (both of which we need) and `systemd-boot` (which you want to get rid of) installed).
> I'm not sure if that means I'm okay to upgrade or not. This is really confusing.

Thanks for the feedback - much appreciated (the clearer our guides are, the smoother the upgrades for our community will be). Any suggestions on how to improve that part? Thanks!