9.0 Beta - VM Fails to start - get_drive_id: no interface

After a clean and error-free upgrade, starting an EFI-based VM results in:

Code:
# qm start 115
get_drive_id: no interface at /usr/share/perl5/PVE/QemuServer/Drive.pm line 864.

pveversion -v
Code:
proxmox-ve: 9.0.0 (running kernel: 6.14.8-1-pve)
pve-manager: 9.0.0~8 (running version: 9.0.0~8/08dc1724dedced56)
proxmox-kernel-helper: 9.0.0
pve-kernel-6.2: 8.0.5
pve-kernel-5.15: 7.4-4
pve-kernel-6.1: 7.3-6
proxmox-kernel-6.14.8-1-pve-signed: 6.14.8-1
proxmox-kernel-6.14: 6.14.8-1
proxmox-kernel-6.14.8-1-bpo12-pve-signed: 6.14.8-1~bpo12+1
proxmox-kernel-6.14.5-1-bpo12-pve-signed: 6.14.5-1~bpo12+1
proxmox-kernel-6.11.11-2-pve-signed: 6.11.11-2
proxmox-kernel-6.11: 6.11.11-2
proxmox-kernel-6.8.12-12-pve-signed: 6.8.12-12
proxmox-kernel-6.8: 6.8.12-12
proxmox-kernel-6.5.13-6-pve-signed: 6.5.13-6
proxmox-kernel-6.5: 6.5.13-6
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.1.15-1-pve: 6.1.15-1
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
amd64-microcode: 3.20250311.1
ceph-fuse: 19.2.2-pve2
corosync: 3.1.9-pve2
criu: 4.1-1
ifupdown2: 3.3.0-1+pmx7
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.2
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.2
libpve-cluster-perl: 9.0.2
libpve-common-perl: 9.0.6
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.1
libpve-network-perl: 1.1.0
libpve-rs-perl: 0.10.4
libpve-storage-perl: 9.0.6
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.2-1
proxmox-backup-file-restore: 4.0.2-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.0.0
proxmox-kernel-helper: 9.0.0
proxmox-mail-forward: 1.0.1
proxmox-mini-journalreader: 1.6
proxmox-widget-toolkit: 5.0.2
pve-cluster: 9.0.2
pve-container: 6.0.2
pve-docs: 9.0.4
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.0
pve-firewall: 6.0.2
pve-firmware: 3.16-3
pve-ha-manager: 5.0.1
pve-i18n: 3.5.0
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.4
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1

Did a bit of searching, but this one seems to be a new issue.

After the upgrade, running pve8to9 --full now produces an error:

Code:
# pve8to9 --full
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
PASS: all packages up-to-date

Checking proxmox-ve package version..
PASS: already upgraded to Proxmox VE 9

Checking running kernel version..
PASS: running new kernel '6.14.8-1-pve' after upgrade.
INFO: Found outdated kernel meta-packages, taking up extra space on boot partitions.
      After a successful upgrade, you can remove them using this command:
      apt remove pve-kernel-6.2

= CHECKING CLUSTER HEALTH/SETTINGS =

PASS: systemd unit 'pve-cluster.service' is in state 'active'
PASS: systemd unit 'corosync.service' is in state 'active'
PASS: Cluster Filesystem is quorate.

Analzying quorum settings and state..
INFO: configured votes - nodes: 3
INFO: configured votes - qdevice: 0
INFO: current expected votes: 3
INFO: current total votes: 3

Checking nodelist entries..
PASS: nodelist settings OK

Checking totem settings..
PASS: totem settings OK

INFO: run 'pvecm status' to get detailed cluster status..

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

PASS: storage 'Condor' enabled and active.
SKIP: storage 'FortCondor' disabled.
SKIP: storage 'Shinra' disabled.
SKIP: storage 'encrypted_zfs' disabled.
SKIP: storage 'goldcrypt' disabled.
PASS: storage 'local' enabled and active.
PASS: storage 'local-zfs' enabled and active.
INFO: Checking storage content type configuration..
PASS: no storage content problems found
PASS: no storage re-uses a directory for multiple content types.
INFO: Check for usage of native GlusterFS storage plugin...
PASS: No GlusterFS storage found.

= VIRTUAL GUEST CHECKS =

INFO: Checking for running guests..
PASS: no running guest detected.
INFO: Checking if LXCFS is running with FUSE3 library, if already upgraded..
PASS: systems seems to be upgraded and LXCFS is running with FUSE 3 library
INFO: Checking for VirtIO devices that would change their MTU...
Undefined subroutine &PVE::QemuServer::parse_net called at /usr/share/perl5/PVE/CLI/pve8to9.pm line 1704.
 
The config of the affected VM (115):

Code:
agent: 1
bios: ovmf
boot: order=scsi0
cores: 4
cpu: host
machine: q35
memory: 16384
meta: creation-qemu=6.1.0,ctime=1641651421
name: Chadley
net0: virtio=E2:82:D3:19:90:63,bridge=vmbr0,tag=2
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-115-disk-1,size=64G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=d961dda8-b50a-42cf-9456-d08a57fd7c75
sockets: 1
tablet: 0
tags: vm
vmgenid: 4ccf6ff2-def2-4ce2-b5b7-163b9f5832fc
usb0: host=1a86:55d4
usb1: host=0658:0200
usb2: host=1a6e:089a
 
For reference: I moved the storage location, to no effect (not as a troubleshooting measure, but for migration).
In the end I worked around this for now by commenting out the two new check lines in /usr/share/perl5/PVE/QemuServer/Drive.pm around line 864.

This is probably not the best idea I've ever had, but it's a test node and it does get the VM booting again.
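If you want to see exactly what you would be commenting out before touching the file, something like this prints the area around the failing check (assuming the line number from the error message; it can shift between qemu-server versions):

Code:
# show the lines around the check that dies in Drive.pm
sed -n '855,875p' /usr/share/perl5/PVE/QemuServer/Drive.pm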
 
That did the trick, thank you.

Added the missing EFI disk via:
qm set 115 --efidisk0 local-zfs:0

Uncommented the checks and the VM restarted fine.
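In case it helps anyone else hitting this, the same fix for a different VM would look roughly like the sketch below (hypothetical VMID 100; efitype=4m and pre-enrolled-keys=1 are the usual defaults for new OVMF disks, and the storage name needs adjusting to your setup):

Code:
# add a missing EFI vars disk to an OVMF VM (hypothetical VMID 100)
qm set 100 --efidisk0 local-zfs:0,efitype=4m,pre-enrolled-keys=1
# confirm the disk was added
qm config 100 | grep '^efidisk'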

Perhaps interesting, though maybe intended: pve8to9 --full still exits early with

Undefined subroutine &PVE::QemuServer::parse_net called at /usr/share/perl5/PVE/CLI/pve8to9.pm line 1704.

The host in question only has the one VM on it (and no containers), but that said, everything seems fine.
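A quick way to confirm it is the refactoring and not something broken locally (a hypothetical one-liner; it just asks Perl whether the subroutine still exists in the old namespace):

Code:
perl -MPVE::QemuServer -e 'print defined(&PVE::QemuServer::parse_net) ? "present\n" : "absent\n"'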
 
That is due to a bug stemming from some refactoring that happened in PVE 9 but not in PVE 8; it should be fixed in pve-manager 9.0.0~10, which was just uploaded. Thanks for your report!
 
Cheers!

Updated and confirmed that pve8to9 --full now produces its full output with no problems on 9.0.0~10.

Thank you one and all :D
 
I ran into this today while doing an upgrade on a 2-node cluster:

Code:
proxmox-ve: 9.0.0 (running kernel: 6.14.8-1-pve)
pve-manager: 9.0.0~10 (running version: 9.0.0~10/0fef50945ccd3b7e)
proxmox-kernel-helper: 9.0.0
proxmox-kernel-6.14.8-1-pve-signed: 6.14.8-1
proxmox-kernel-6.14: 6.14.8-1
proxmox-kernel-6.14.8-1-bpo12-pve-signed: 6.14.8-1~bpo12+1
proxmox-kernel-6.14.5-1-bpo12-pve-signed: 6.14.5-1~bpo12+1
proxmox-kernel-6.8.12-12-pve-signed: 6.8.12-12
proxmox-kernel-6.8: 6.8.12-12
ceph-fuse: 19.2.2-pve2
corosync: 3.1.9-pve2
criu: 4.1-1
ifupdown2: 3.3.0-1+pmx7
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.2
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.2
libpve-cluster-perl: 9.0.2
libpve-common-perl: 9.0.6
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.1
libpve-network-perl: 1.1.0
libpve-rs-perl: 0.10.4
libpve-storage-perl: 9.0.6
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.3-1
proxmox-backup-file-restore: 4.0.3-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.0.0
proxmox-kernel-helper: 9.0.0
proxmox-mail-forward: 1.0.1
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.2
pve-cluster: 9.0.2
pve-container: 6.0.2
pve-docs: 9.0.4
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.0
pve-firewall: 6.0.2
pve-firmware: 3.16-3
pve-ha-manager: 5.0.1
pve-i18n: 3.5.0
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.4
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1

I edited the following lines in /usr/share/perl5/PVE/QemuServer/Drive.pm to get the VMs booting and back online again.
Code:
sub get_drive_id {
    my ($drive) = @_;

    #die "get_drive_id: no interface" if !defined($drive->{interface});
    #die "get_drive_id: no index" if !defined($drive->{index});

    return "$drive->{interface}$drive->{index}";
}
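For anyone with more guests on a cluster, a rough way to list the VMs that are likely affected (assuming, as the thread suggests, that the trigger is an OVMF VM without an efidisk0; this just greps the standard config location under /etc/pve):

Code:
# list VM configs across the cluster that use OVMF but have no EFI vars disk
for conf in /etc/pve/nodes/*/qemu-server/*.conf; do
    grep -q '^bios: ovmf' "$conf" && ! grep -q '^efidisk0:' "$conf" && echo "$conf"
done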
 
Thanks guys.

Installed these updates just now:
Code:
Get:1 http://download.proxmox.com/debian/pve trixie/pve-test amd64 apparmor amd64 4.1.1-pmx1 [711 kB]
Get:2 http://download.proxmox.com/debian/pve trixie/pve-test amd64 libapparmor1 amd64 4.1.1-pmx1 [43.8 kB]
Get:3 http://download.proxmox.com/debian/pve trixie/pve-test amd64 libpve-rs-perl amd64 0.10.5 [2,994 kB]
Get:4 http://download.proxmox.com/debian/pve trixie/pve-test amd64 pve-container all 6.0.3 [146 kB]
Get:5 http://download.proxmox.com/debian/pve trixie/pve-test amd64 qemu-server amd64 9.0.5 [323 kB]

I could successfully stop and start a VM again, after checking that the manual modifications to Drive.pm listed earlier had been undone and the file was back to its stock state.
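If anyone is unsure whether their hand edits to Drive.pm are still in place, reinstalling the owning package restores the shipped file (dpkg -S will confirm which package that is; it should be qemu-server):

Code:
# confirm which package ships Drive.pm, then restore the packaged copy
dpkg -S /usr/share/perl5/PVE/QemuServer/Drive.pm
apt install --reinstall qemu-server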