[SOLVED] Mint/Ubuntu/Debian won't install or boot with UEFI

destramento

New Member
Aug 6, 2023
So, I'm trying to install Mint, Ubuntu, and Debian in a VM, and none of them work with UEFI. As a result I settled on Mint (if it helps, it's linuxmint-22-cinnamon-64bit.iso) to test different solutions, but no result so far.

Problem:
Near the end of installation, Linux asks me to remove the CD-ROM. If I don't, the installation starts again (but it looks like all the data was written: the installer offers to install over the current OS, so the OS data is there, it just doesn't boot). If I do remove the CD-ROM, boot ends with the usual *>>>Start PXE over IPv4* error, i.e. the firmware can't find anything to boot from. So after installing Linux with UEFI, I can't boot the OS.

Things i tried:
With a new VM, I tried disabling *pre-enroll keys*; that doesn't work.
Secure Boot: I'm not sure what exactly to do here. As I understand it, Secure Boot in the VM firmware is disabled by default when the *pre-enroll keys* checkbox is unchecked. I tried enabling *pre-enroll keys* and then disabling *Attempt Secure Boot* in the firmware menu; it doesn't seem to work, or I just don't understand the order of operations (what to select and when, before or after the install). You can also remove the enrolled keys, so I'm not sure whether that works and when I should do it.

SeaBIOS works fine, so I could just use that, but for now I'd like to solve the UEFI problem if it is solvable.
PVE 8.1.3 on LVM, VM is q35; not sure what other data could be helpful here, screenshot below.

I also tried the same .iso on Hyper-V (different PC) with Secure Boot disabled; the OS boots about half the time, sometimes it starts and sometimes it doesn't, with no obvious pattern. I don't know, maybe Linux systems just don't work, or work incorrectly, with UEFI.

[screenshot attached]
 
Hi,
please share the output of pveversion -v and qm config <ID> for an affected VM. What do you see when you check the possible boot options from the OVMF menu after installation: https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries
Hi, thanks for the reply; data below.

root@pve1:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.2.16-3-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
pve-kernel-6.2.16-2-pve: 6.2.16-2
pve-kernel-6.2.16-1-pve: 6.2.16-1
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.5
pve-qemu-kvm: 8.1.2-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
efidisk0: local:963/vm-963-disk-0.raw,efitype=4m,size=528K
ide2: none,media=cdrom
machine: q35
memory: 4096
meta: creation-qemu=8.1.2,ctime=1726129262
name: linux-mint-test-3
net0: e1000=BC:24:11:12:34:53,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local:963/vm-963-disk-1.raw,backup=0,cache=writeback,size=32G
smbios1: uuid=7058e12b-71f8-4e4d-8daa-de6f36d638f6
sockets: 1
vmgenid: a0a81c53-5003-409f-a35b-3f397dd9811d
[screenshots attached]
 
Please upgrade to current versions and see if the issue persists (IIRC there were some fixes regarding the LSI SCSI controller):
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#system_software_updates
https://pve.proxmox.com/wiki/Package_Repositories
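In short, the standard procedure from the links above (assuming the correct package repositories are already configured on the host):

```shell
# Refresh package lists and upgrade the Proxmox VE host.
# Check the repository configuration (linked above) first.
apt update
apt dist-upgrade
```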

Does changing the SCSI controller to VirtIO SCSI (single) help?
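For reference, the controller can also be switched from the CLI; a sketch assuming VM ID 963 from the `qm config` output above:

```shell
# Change the SCSI controller type for VM 963 to VirtIO SCSI single
# (adjust the VM ID to your own setup):
qm set 963 --scsihw virtio-scsi-single

# Verify the change took effect:
qm config 963 | grep scsihw
```

The same setting is available in the GUI under the VM's Hardware tab.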
VirtIO SCSI (single) works, thanks!
Is there a way to roll back an apt-get dist-upgrade, or to save the current package versions (not sure what that's called)? I use GPU passthrough for some VMs, but it needs some outdated packages, so I'm afraid to update anything right now without a way to *downgrade* later.
 
No, it's not generally possible to downgrade a whole Proxmox VE (or Debian) installation. For passthrough, the kernel and QEMU should be the most relevant packages, and those you can downgrade (or configure booting into an older kernel) if required. But of course, I cannot guarantee anything, especially if you are using third-party drivers or similar. GPU passthrough is finicky.
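For the kernel part, a sketch of pinning an older kernel on a PVE 8 host (the version string is taken from the pveversion output above; adjust to whatever is installed on your system):

```shell
# List the kernels proxmox-boot-tool knows about:
proxmox-boot-tool kernel list

# Pin an older kernel so the host keeps booting it across upgrades:
proxmox-boot-tool kernel pin 6.2.16-3-pve

# Remove the pin later to return to the newest installed kernel:
proxmox-boot-tool kernel unpin
```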
 
Haha, yeah, I felt that when I tried to set up GPU passthrough following the tutorials, pretty painful. VirtIO SCSI works, thanks for the help. In general, can Linux, UEFI, and the default SCSI controller work together? I mean, is there a technical issue that prevents them from working together, or can they work, and a PVE update would solve this?
 
Oh right, I remember now: the OVMF firmware is built without LSI support, because the LSI driver is not maintained. But there is no advantage to using the default controller over VirtIO. VirtIO in QEMU is better maintained and has better performance; LSI is just the backend default for historical reasons, to avoid breaking compatibility. The UI won't default to the LSI controller either.
 
