After a clean and error-free upgrade, starting an EFI-based VM results in:
Code:
# qm start 115
get_drive_id: no interface at /usr/share/perl5/PVE/QemuServer/Drive.pm line 864.
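The error points at the EFI disk handling in /usr/share/perl5/PVE/QemuServer/Drive.pm. For reference, this is how I'd inspect the VM's EFI disk entry (the efidisk0 line below is only an example of what such an entry typically looks like, not my exact config):
Code:
# show the VM config and pull out the EFI disk entry
qm config 115 | grep -i efidisk
# a typical entry looks something like:
# efidisk0: local-zfs:vm-115-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M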
pveversion -v
Code:
proxmox-ve: 9.0.0 (running kernel: 6.14.8-1-pve)
pve-manager: 9.0.0~8 (running version: 9.0.0~8/08dc1724dedced56)
proxmox-kernel-helper: 9.0.0
pve-kernel-6.2: 8.0.5
pve-kernel-5.15: 7.4-4
pve-kernel-6.1: 7.3-6
proxmox-kernel-6.14.8-1-pve-signed: 6.14.8-1
proxmox-kernel-6.14: 6.14.8-1
proxmox-kernel-6.14.8-1-bpo12-pve-signed: 6.14.8-1~bpo12+1
proxmox-kernel-6.14.5-1-bpo12-pve-signed: 6.14.5-1~bpo12+1
proxmox-kernel-6.11.11-2-pve-signed: 6.11.11-2
proxmox-kernel-6.11: 6.11.11-2
proxmox-kernel-6.8.12-12-pve-signed: 6.8.12-12
proxmox-kernel-6.8: 6.8.12-12
proxmox-kernel-6.5.13-6-pve-signed: 6.5.13-6
proxmox-kernel-6.5: 6.5.13-6
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.1.15-1-pve: 6.1.15-1
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
amd64-microcode: 3.20250311.1
ceph-fuse: 19.2.2-pve2
corosync: 3.1.9-pve2
criu: 4.1-1
ifupdown2: 3.3.0-1+pmx7
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.2
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.2
libpve-cluster-perl: 9.0.2
libpve-common-perl: 9.0.6
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.1
libpve-network-perl: 1.1.0
libpve-rs-perl: 0.10.4
libpve-storage-perl: 9.0.6
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.2-1
proxmox-backup-file-restore: 4.0.2-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.0.0
proxmox-kernel-helper: 9.0.0
proxmox-mail-forward: 1.0.1
proxmox-mini-journalreader: 1.6
proxmox-widget-toolkit: 5.0.2
pve-cluster: 9.0.2
pve-container: 6.0.2
pve-docs: 9.0.4
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.0
pve-firewall: 6.0.2
pve-firmware: 3.16-3
pve-ha-manager: 5.0.1
pve-i18n: 3.5.0
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.4
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1
Did a bit of searching, but this one seems to be a new issue.
After the upgrade, running pve8to9 --full now produces an error as well:
Code:
# pve8to9 --full
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =
Checking for package updates..
PASS: all packages up-to-date
Checking proxmox-ve package version..
PASS: already upgraded to Proxmox VE 9
Checking running kernel version..
PASS: running new kernel '6.14.8-1-pve' after upgrade.
INFO: Found outdated kernel meta-packages, taking up extra space on boot partitions.
After a successful upgrade, you can remove them using this command:
apt remove pve-kernel-6.2
= CHECKING CLUSTER HEALTH/SETTINGS =
PASS: systemd unit 'pve-cluster.service' is in state 'active'
PASS: systemd unit 'corosync.service' is in state 'active'
PASS: Cluster Filesystem is quorate.
Analzying quorum settings and state..
INFO: configured votes - nodes: 3
INFO: configured votes - qdevice: 0
INFO: current expected votes: 3
INFO: current total votes: 3
Checking nodelist entries..
PASS: nodelist settings OK
Checking totem settings..
PASS: totem settings OK
INFO: run 'pvecm status' to get detailed cluster status..
= CHECKING HYPER-CONVERGED CEPH STATUS =
SKIP: no hyper-converged ceph setup detected!
= CHECKING CONFIGURED STORAGES =
PASS: storage 'Condor' enabled and active.
SKIP: storage 'FortCondor' disabled.
SKIP: storage 'Shinra' disabled.
SKIP: storage 'encrypted_zfs' disabled.
SKIP: storage 'goldcrypt' disabled.
PASS: storage 'local' enabled and active.
PASS: storage 'local-zfs' enabled and active.
INFO: Checking storage content type configuration..
PASS: no storage content problems found
PASS: no storage re-uses a directory for multiple content types.
INFO: Check for usage of native GlusterFS storage plugin...
PASS: No GlusterFS storage found.
= VIRTUAL GUEST CHECKS =
INFO: Checking for running guests..
PASS: no running guest detected.
INFO: Checking if LXCFS is running with FUSE3 library, if already upgraded..
PASS: systems seems to be upgraded and LXCFS is running with FUSE 3 library
INFO: Checking for VirtIO devices that would change their MTU...
Undefined subroutine &PVE::QemuServer::parse_net called at /usr/share/perl5/PVE/CLI/pve8to9.pm line 1704.
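So pve8to9 itself trips over a missing PVE::QemuServer::parse_net. In case it's useful, here's a quick diagnostic sketch to check where (or whether) that subroutine is still defined on the upgraded system, and which package versions are involved:
Code:
# search the installed Perl modules for the parse_net definition
grep -rn "sub parse_net" /usr/share/perl5/PVE/
# confirm the installed versions of the packages involved
dpkg -l qemu-server pve-manager | grep '^ii'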