Cannot Edit Boot Order with NVMe Drive

chewie198

Active Member
Nov 5, 2016
I have a Proxmox VM with an NVMe boot drive passed through to it via PCIe passthrough, and I am unable to modify the boot order to prioritize NVMe boot over the UEFI shell. This appears to be the same issue posted about here, but that thread is over a month old and received no responses. Has anyone else encountered this problem?

It would appear that there are two problems with the way Proxmox behaves here: it does not include passed-through drives in the boot order configuration options, and it also does not save any changes made via the OVMF graphical interface despite the presence of an attached EFI disk. Finding a way to resolve either of these shortcomings should allow for a potential workaround.
 
it may be because ovmf only initializes devices with a 'bootindex', which we do not set for pci passthrough devices

could you try the following:

execute 'qm showcmd ID --pretty' and look for the line of the passthrough (should look like: -device vfio-pci,host... )
copy that into the config under the 'args' property and add a bootindex to it
for example: 102.conf
Code:
args: -device 'vfio-pci,host=02:00.0,id=hostpci0,bus,...,bootindex=50'

and comment out the hostpci line

if that does work, we could maybe add an optional bootindex to the hostpci devices which would then enable that
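The edit can also be scripted; here is a minimal sketch of the transformation (the VM ID, PCI address, and bus name below are invented for illustration, not taken from a real VM):

```shell
# Sketch: take the -device line from 'qm showcmd <ID> --pretty' output and
# turn it into an 'args:' entry with a bootindex appended before the
# closing quote. All values here are made-up examples.
showcmd_line="-device 'vfio-pci,host=02:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0'"
args_line="${showcmd_line%\'},bootindex=50'"
printf 'args: %s\n' "$args_line"
```

The printed line is what would go into the VM's .conf file under the args property.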
 
I was able to modify the config file successfully by adding this line,
Code:
-device 'vfio-pci,host=07:00.0,id=hostpci3,bus=ich9-pcie-port-4,addr=0x0,bootindex=50'
however, I receive this error when trying to start up the VM instance:
Code:
kvm: -device vfio-pci,host=07:00.0,id=hostpci3,bus=ich9-pcie-port-4,addr=0x0,bootindex=50: Bus 'ich9-pcie-port-4' not found

I'm guessing this is because Proxmox is inserting the additional arguments just before the line which defines the pcie root ports.
Code:
-readconfig /usr/share/qemu-server/pve-q35.cfg

Is there any way to work around this? Here is the full output from running the qm start command:
Code:
kvm: -device vfio-pci,host=07:00.0,id=hostpci3,bus=ich9-pcie-port-4,addr=0x0,bootindex=50: Bus 'ich9-pcie-port-4' not found
start failed: command '/usr/bin/kvm -id 105 -name amb47workstation -chardev 'socket,id=qmp,path=/var/run/qemu-server/105.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qemu-server/105-event.qmp,server,nowait' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/105.pid -daemonize -smbios 'type=1,uuid=abd2ccad-4438-490d-aa22-0e74baf471cb' -drive 'if=pflash,unit=0,format=raw,readonly,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,file=/dev/zvol/rpool/data/vm-105-disk-0' -smp '8,sockets=1,cores=8,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga none -nographic -no-hpet -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=proxmox,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,kvm=off' -m 8192 -device 'vfio-pci,host=07:00.0,id=hostpci3,bus=ich9-pcie-port-4,addr=0x0,bootindex=50' -readconfig /usr/share/qemu-server/pve-q35.cfg -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=03:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on' -device 'vfio-pci,host=03:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1' -device 'vfio-pci,host=00:1b.0,id=hostpci1,bus=ich9-pcie-port-2,addr=0x0' -device 'vfio-pci,host=00:14.0,id=hostpci2,bus=ich9-pcie-port-3,addr=0x0' -chardev 'socket,path=/var/run/qemu-server/105.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:f630670652' -drive 'if=none,id=drive-ide0,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0' -netdev 'type=tap,id=net0,ifname=tap105i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 
'virtio-net-pci,mac=B6:5F:55:91:78:39,netdev=net0,bus=pci.0,addr=0x12,id=net0' -rtc 'driftfix=slew,base=localtime' -machine 'type=q35' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1
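The diagnosis above can be illustrated in isolation: the device string added via 'args' lands before the -readconfig option, so any bus it names does not exist yet when QEMU parses it. A sketch (the command line is heavily abbreviated, not the real one):

```shell
# Everything before -readconfig is parsed before the pcie ports are
# created; the args-injected device already references one of them.
cmdline="-m 8192 -device vfio-pci,host=07:00.0,id=hostpci3,bus=ich9-pcie-port-4,addr=0x0,bootindex=50 -readconfig /usr/share/qemu-server/pve-q35.cfg"
before_readconfig=${cmdline%%-readconfig*}
case "$before_readconfig" in
  *ich9-pcie-port-4*) echo "bus referenced before it is defined" ;;
esac
```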
 
So I took a stab at editing and running the command standalone, but it just hangs the shell; nothing happens, and the guest remains under 1% CPU usage with no display. There are no errors returned, but I still have to Ctrl-C and run 'qm stop 105' to recover. Here's the edited command that I used to launch the VM:

Code:
/usr/bin/kvm -id 105 -name amb47workstation -chardev 'socket,id=qmp,path=/var/run/qemu-server/105.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qemu-server/105-event.qmp,server,nowait' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/105.pid -daemonize -smbios 'type=1,uuid=abd2ccad-4438-490d-aa22-0e74baf471cb' -drive 'if=pflash,unit=0,format=raw,readonly,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,file=/dev/zvol/rpool/data/vm-105-disk-0' -smp '8,sockets=1,cores=8,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga none -nographic -no-hpet -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=proxmox,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,kvm=off' -m 8192 -readconfig /usr/share/qemu-server/pve-q35.cfg -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=07:00.0,id=hostpci3,bus=ich9-pcie-port-4,addr=0x0,bootindex=50' -device 'vfio-pci,host=03:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on' -device 'vfio-pci,host=03:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1' -device 'vfio-pci,host=00:1b.0,id=hostpci1,bus=ich9-pcie-port-2,addr=0x0' -device 'vfio-pci,host=00:14.0,id=hostpci2,bus=ich9-pcie-port-3,addr=0x0' -chardev 'socket,path=/var/run/qemu-server/105.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:f630670652' -drive 'if=none,id=drive-ide0,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0' -netdev 'type=tap,id=net0,ifname=tap105i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 
'virtio-net-pci,mac=B6:5F:55:91:78:39,netdev=net0,bus=pci.0,addr=0x12,id=net0' -rtc 'driftfix=slew,base=localtime' -machine 'type=q35' -global 'kvm-pit.lost_tick_policy=discard'
 
@dcsapak, any other ideas as to what I might try? I would be willing to purchase a subscription or two if necessary to help sponsor development if you feel the bootindex change you proposed would get this working. Alternatively, I'm a software developer, so I may be able to submit a pull request myself if you can point me in the right direction. It would be nice to find a quick way to verify that this solution works before going to all of that trouble, however.
 
can you post the complete config and the commandline you try to execute? (preferably with the '--pretty' formatting)

i do not want to add such an option if it is not really working, and i currently have no spare nvme to test with.

Alternatively, I'm a software developer myself, so I may be able to submit a pull request myself if you can point me in the right direction.
see https://pve.proxmox.com/wiki/Developer_Documentation for general developer info

and https://git.proxmox.com/?p=qemu-server.git;a=summary for the part that generates the qemu-commandline from the config (specifically the file 'PVE/QemuServer.pm')
 
I think I'm stuck at the same problem. Is there a solution already implemented?

I'm passing a PCIe NVMe device through ... right now I manually change the boot device every time I start the VM.
 
Hi everyone,

i have the exact same issue.
It seems that 'args' gets processed before the qm command line inserts the readconfig for q35.
It would actually be cool if someone could implement a quick fix (I'm not really a developer, more a sysadmin, but I would try it too, because I need that feature to be stable as soon as possible).

So yeah, is there any update besides the one from early December?
 
Hi everyone,

a colleague of mine fixed the problem today, but because he deserves the credit for the fix he'll publish the solution by himself soon.

EDIT:

There seems to be a more basic problem.
If "args" is used instead of the hostpciX configuration in the qemu config file, the device does not get initialised correctly.
The device in /dev/vfio will not be created in that case, which prevents the VM from booting because the device is missing.

I only have a fix for the args issue.
The problem seems to be in QemuServer.pm, so i'll try to debug and fix it. But honestly, i think the easiest fix would be to give hostpciX a parameter that marks the device as bootable. An entry in the webinterface for that case could be useful too.
It seems that there is no such option:
https://pve.proxmox.com/wiki/Manual:_qm.conf
 
Sorry, the content below does not work. Why? Because the cmdline from "qm showcmd 101" is different from what is executed when pushing the "start" button in the web GUI.
"qm start 101" works fine though.
Sooo, i'm at a dead end at the moment with my amount of knowledge.

Someone from PVE should have a look into this.
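One possible explanation, offered here only as a guess: `qm start` spawns a fresh perl process that loads the edited QemuServer.pm, while a GUI start goes through the long-running pvedaemon, which may still have the old module in memory. If that is the cause, restarting the daemons after editing should make the GUI pick up the change:

```
systemctl restart pvedaemon pveproxy
```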

Here starts the mostly useless content:
Soooo, i fixed it. Kinda. It's a dirty and static solution, but it keeps the system running until the next update.
It's more like a Hotfix.

/usr/share/perl5/PVE/QemuServer.pm # I added the highlighted line because the nvme drive was configured as hostpci1 in my case.
If yours is hostpci0, you need to change the if($i == 1) to whatever number you gave the device.
You have to hardcode the vmid for this to work too (in my case 101).

With that, the device gets initialised correctly after a reboot too (which will not happen if you configured it with args:).
Code:
my $j=0;

        foreach my $pcidevice (@$pcidevices) {

            my $id = "hostpci$i";
            $id .= ".$j" if $multifunction;
            my $addr = $pciaddr;
            $addr .= ".$j" if $multifunction;
            my $devicestr = "vfio-pci";
            if ($sysfspath) {
                $devicestr .= ",sysfsdev=$sysfspath";
            } else {
                $devicestr .= ",host=$pcidevice->{id}";
            }
            $devicestr .= ",id=$id$addr";

            if($j == 0){
                $devicestr .= "$rombar$xvga";
                $devicestr .= ",multifunction=on" if $multifunction;
                $devicestr .= ",romfile=/usr/share/kvm/$romfile" if $romfile;
                $devicestr .= ",bootindex=50" if($i == 1 && $vmid == 101);  # <------ THIS ONE!
            }
            push @$devices, '-device', $devicestr;
            $j++;
        }
    }
 
Sorry for necroing this thread but this is the most recent result for searching "proxmox passthrough nvme boot".

I also have a dirty solution. Create a small 100MB-1GB virtio disk, boot into Windows (by hitting ESC while boot is happening and selecting the NVMe drive), create an EFI partition in Windows on the 2nd virtio disk (I found instructions using diskpart; it won't let me post links, but just google "create efi partition windows" and it's the first thing that comes up), then use a utility like EasyUEFI to move the EFI from the NVMe drive to the virtio drive. Shut down the VM and set the virtio disk as the first item in the boot order, and Windows should boot directly without intervention once again.

It's really dirty but hey, it works.
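For reference, the diskpart steps alluded to above might look like the following. This is a sketch under assumptions (disk 1 is the new, empty virtio disk; `clean` and `convert gpt` will wipe it), to be run inside the Windows guest; the shell snippet only writes the script out:

```shell
# Write out a diskpart script that creates an EFI system partition on the
# second (empty!) disk. The disk number and size are assumptions.
cat > make-efi.dpart <<'EOF'
select disk 1
clean
convert gpt
create partition efi size=100
format quick fs=fat32 label=System
assign letter=S
EOF
```

In the guest you would then run `diskpart /s make-efi.dpart` before using EasyUEFI as described.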
 
Sorry for necroing as well.

Sorry for necroing this thread but this is the most recent result for searching "proxmox passthrough nvme boot".

I also have a dirty solution. Create a small 100MB-1GB virtio disk, boot into Windows (by hitting ESC while boot is happening and selecting the NVMe drive), create an EFI partition in Windows on the 2nd virtio disk (I found instructions using diskpart; it won't let me post links, but just google "create efi partition windows" and it's the first thing that comes up), then use a utility like EasyUEFI to move the EFI from the NVMe drive to the virtio drive. Shut down the VM and set the virtio disk as the first item in the boot order, and Windows should boot directly without intervention once again.

It's really dirty but hey, it works.
Your solution worked just fine until Proxmox 6.2. I'm not using my machine as a 24/7 server; I turn it off for the night. For some reason, after 6.2, the EFI on that small virtio disk started getting rewritten on every reboot/power-off.

Due to this, I was forced to find a solution. In the end, it turned out that changing the boot order inside OVMF (ESC or F2 during VM boot) actually works. You need to change it there and press "Reset". Don't open the BIOS again after that, otherwise it'll reset your custom order.
On my initial tries, I thought it wasn't saving, because every entry into the BIOS resulted in the default boot order.
 
Found this thread earlier on and was about to post about the Windows EFI partition trick suggested by @StackUnderflow. But I faced the same issue as @woloss, where the disk would somehow reset; sometimes it would work and sometimes it wouldn't. Was reading around and found this thread.

https://forum.proxmox.com/threads/ovmf-uefi-windows-10-boot-option-wont-stick.27376/

He doesn't really explicitly tell you how to do it. But the answer is basically staring you in the face.

TL;DR

1. Create a new Windows VM and set it up as you like.
2. Go to Options and remove Network and CD-ROM from the boot options. Set them to none and leave only the first option set to disk0.

[screenshot: vmoptions.png]

3. Delete both the EFI and SATA/VirtIO/SCSI disks in Hardware. (Apparently the EFI disk is supposed to save the boot order, but it never worked for me anyway.)

[screenshot: vmhardware.png]

Once you've done so, the VM should default boot into your NVMe drive that has been passed through.
 
TL;DR

1. Create a new Windows VM and set it up as you like.
2. Go to Options and remove Network and CD-ROM from the boot options. Set them to none and leave only the first option set to disk0.
3. Delete both the EFI and SATA/VirtIO/SCSI disks in Hardware. (Apparently the EFI disk is supposed to save the boot order, but it never worked for me anyway.)

Once you've done so, the VM should default boot into your NVMe drive that has been passed through.

You, sir, are a genius! Thank you so very much.

The boot loader threw me a few errors (something about not being able to boot a USB drive, which was bootable, but anyway), then started loading successfully!
 
can you post the complete config and the commandline you try to execute? (preferably with the '--pretty' formatting)

i do not want to add such an option if it is not really working, and i currently have no spare nvme to test with.


see https://pve.proxmox.com/wiki/Developer_Documentation for general developer info

and https://git.proxmox.com/?p=qemu-server.git;a=summary for the part that generates the qemu-commandline from the config (specifically the file 'PVE/QemuServer.pm')

I am sure you have found time to buy an NVMe to test this, as it is still a problem in Proxmox 6.2 and two years have passed since the initial post was made..
 
You, sir, are a genius! Thank you so very much.

The boot loader threw me a few errors (something about not being able to boot a USB drive, which was bootable, but anyway), then started loading successfully!

I had issues with this, as I had a 3TB drive connected as scsi0. I edited the file manually to scsi1 and the problem was solved. The key is to leave scsi0 as the only boot option in the GUI, and then make sure you do not have a scsi0 in your hardware config in the GUI.
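The manual edit described here can be sketched as follows, done on a throwaway copy of a config (on a real node the file lives at /etc/pve/qemu-server/<vmid>.conf, and the disk spec below is invented):

```shell
# Rename the scsi0 drive entry to scsi1, so no scsi0 disk exists in
# hardware while the boot option can keep pointing at scsi0
# (the passed-through drive). Values here are examples only.
conf=$(mktemp)
printf 'bootdisk: scsi0\nscsi0: local-lvm:vm-105-disk-1,size=3000G\n' > "$conf"
sed -i 's/^scsi0:/scsi1:/' "$conf"
cat "$conf"
```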
 
OP here, had two kids in the last two years and lost track of the current status on this. When I went back to look at this this week, I can see that with 6.2-15, there is a checkbox next to each potential boot device under the VM options. I migrated my boot drive over to the NVMe drive, shut the system down, enabled it as bootable under the options, and set it to boot first. From what I can tell, the system boots fine and this has held up through numerous host and guest reboots.

@dcsapak, do you know what version this functionality was added in? I know that it wasn't there in 5.3. At any rate, the changes in PVE over the last two years have been excellent. The GUI support for ACME DNS, Ceph, CephFS, Networking (hot config reload!) and others has been a huge time saver, and is much appreciated.

[screenshot: 2020-11-25 (2).png]
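For anyone landing here from a search: with the new boot order editor, the setting ends up in the VM config roughly like this (a sketch; the device names and exact values are assumptions based on the GUI described above):

```
boot: order=hostpci0;net0
hostpci0: 0000:07:00.0,pcie=1
```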
 
@dcsapak, do you know what version this functionality was added in?
it was just recently (i think sometime in oct)

The GUI support for ACME DNS, Ceph, CephFS, Networking (hot config reload!) and others has been a huge time saver, and is much appreciated.
thanks (in the name of all involved) :)
 
