qm device del
but after the VM is shut down, unbind the current driver (usually vfio-pci) from the devices and bind the actual drivers. For example:
Code:
echo "0000:01:00.0" > "/sys/bus/pci/devices/0000:01:00.0/driver/unbind" && echo "0000:01:00.0" > "/sys/bus/pci/drivers/amdgpu/bind"
echo "0000:01:00.1" > "/sys/bus/pci/devices/0000:01:00.1/driver/unbind" && echo "0000:01:00.1" > "/sys/bus/pci/drivers/snd_hda_intel/bind"
Your PCI IDs (0000:01:00.0 etc.) and drivers (amdgpu etc.) might be different.

Yes, because for VMs the driver needs to be vfio-pci for passthrough, but for the Proxmox host you need the actual driver for the device to use it.

1. Since the VMs are shut down, do I still need to unbind the GPU from the VMs first? Because VM B was able to grab the GPU from VM A without A unbinding it.
2. What is the equivalent of /sys/bus/pci/drivers/amdgpu/bind for Nvidia? I looked in /sys/bus/pci/drivers and did not find amdgpu or anything that sounds like nvidia. I do find vfio-pci though.

What driver is loaded for your GPU when you did not use passthrough? What does
lspci -ks YOUR_PCI_ID
show as possible drivers? It's probably nouveau.

Code:
root@pve:~# lspci -ks 0000:08:00.0
08:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 745] (rev a2)
Subsystem: Hewlett-Packard Company GM107 [GeForce GTX 745]
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau
root@pve:~# locate
locate: no pattern to search for specified
root@pve:~# locate nvidiafb
/usr/lib/modules/5.15.104-1-pve/kernel/drivers/video/fbdev/nvidia/nvidiafb.ko
/usr/lib/modules/5.15.74-1-pve/kernel/drivers/video/fbdev/nvidia/nvidiafb.ko
root@pve:~# locate nouveau
/usr/lib/modules/5.15.104-1-pve/kernel/drivers/gpu/drm/nouveau
/usr/lib/modules/5.15.104-1-pve/kernel/drivers/gpu/drm/nouveau/nouveau.ko
/usr/lib/modules/5.15.74-1-pve/kernel/drivers/gpu/drm/nouveau
/usr/lib/modules/5.15.74-1-pve/kernel/drivers/gpu/drm/nouveau/nouveau.ko
root@pve:~#
This is what I have... so just
Code:
/sys/bus/pci/drivers/nvidiafb
? I did locate the module files physically though.

Or nouveau. Which one is loaded (in use) after a reboot of the host when not using passthrough and you do have a working host console?
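If in doubt about which driver currently owns the card, the driver symlink in sysfs is one way to check; a minimal sketch, assuming the GPU really is at 0000:08:00.0:
Bash:
# shows where the driver symlink points (fails if nothing is bound at all)
readlink /sys/bus/pci/devices/0000:08:00.0/driver
# lspci reports the same thing in its "Kernel driver in use" line
lspci -ks 08:00.0 | grep -i 'in use'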
Don't worry, this is not a chat and I'm not waiting for your reply.

Yes, I have a working console. Let me check with a reboot. I have to disable my Windows VM's autostart first. One moment please.
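Disabling the autostart can also be done from the CLI; a quick sketch, assuming the Windows VM has VMID 100 (a made-up VMID, adjust to yours):
Bash:
# stop the VM from starting automatically at host boot
qm set 100 --onboot 0
# verify the current setting (no output means onboot is not set)
qm config 100 | grep onboot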
What is the (exact) output of
lspci -ks 08:00.0
? What is the output of
ls -l /sys/bus/pci/devices/0000:08:00.0/driver
?

Yes, I have a text console on my monitor when the Windows VM is not booted up. How do I tell which driver is being used? I ran lspci -ks 08:00.0 and it showed both nouveau and nvidiafb, like the earlier logs:
Code:
08:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 745] (rev a2)
Subsystem: Hewlett-Packard Company GM107 [GeForce GTX 745]
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau

It is already bound to vfio-pci. I need to see this when you are not doing passthrough and the host console is still working (because that is the situation you want to restore).
This is what I get after a reboot, proxmox login prompt being displayed on my GPU's monitor, before I start Windows. Hope it helps.
Bash:
root@pve:~# ls -l /sys/bus/pci/devices/0000:08:00.0/driver
ls: cannot access '/sys/bus/pci/devices/0000:08:00.0/driver': No such file or directory
root@pve:~# lspci -ks 08:00.0
08:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 745] (rev a2)
Subsystem: Hewlett-Packard Company GM107 [GeForce GTX 745]
Kernel modules: nvidiafb, nouveau
root@pve:~# ls -l /sys/bus/pci/devices/0000:08:00.0/
total 0
-r--r--r-- 1 root root 4096 Apr 26 22:53 ari_enabled
-r--r--r-- 1 root root 4096 Apr 26 22:53 boot_vga
-rw-r--r-- 1 root root 4096 Apr 26 22:53 broken_parity_status
-r--r--r-- 1 root root 4096 Apr 26 22:53 class
-rw-r--r-- 1 root root 4096 Apr 26 22:53 config
-r--r--r-- 1 root root 4096 Apr 26 22:53 consistent_dma_mask_bits
lrwxrwxrwx 1 root root 0 Apr 26 22:53 consumer:pci:0000:08:00.1 -> ../../../virtual/devlink/pci:0000:08:00.0--pci:0000:08:00.1
-r--r--r-- 1 root root 4096 Apr 26 22:53 current_link_speed
-r--r--r-- 1 root root 4096 Apr 26 22:53 current_link_width
-rw-r--r-- 1 root root 4096 Apr 26 22:53 d3cold_allowed
-r--r--r-- 1 root root 4096 Apr 26 22:53 device
-r--r--r-- 1 root root 4096 Apr 26 22:53 dma_mask_bits
-rw-r--r-- 1 root root 4096 Apr 26 22:53 driver_override
-rw-r--r-- 1 root root 4096 Apr 26 22:53 enable
lrwxrwxrwx 1 root root 0 Apr 26 22:53 firmware_node -> ../../../LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:24/device:25
lrwxrwxrwx 1 root root 0 Apr 26 22:53 iommu -> ../../0000:00:00.2/iommu/ivhd0
lrwxrwxrwx 1 root root 0 Apr 26 22:53 iommu_group -> ../../../../kernel/iommu_groups/26
-r--r--r-- 1 root root 4096 Apr 26 22:53 irq
drwxr-xr-x 2 root root 0 Apr 26 22:53 link
-r--r--r-- 1 root root 4096 Apr 26 22:53 local_cpulist
-r--r--r-- 1 root root 4096 Apr 26 22:53 local_cpus
-r--r--r-- 1 root root 4096 Apr 26 22:53 max_link_speed
-r--r--r-- 1 root root 4096 Apr 26 22:53 max_link_width
-r--r--r-- 1 root root 4096 Apr 26 22:53 modalias
-rw-r--r-- 1 root root 4096 Apr 26 22:53 msi_bus
-rw-r--r-- 1 root root 4096 Apr 26 22:53 numa_node
drwxr-xr-x 2 root root 0 Apr 26 22:53 power
-r--r--r-- 1 root root 4096 Apr 26 22:53 power_state
--w--w---- 1 root root 4096 Apr 26 22:53 remove
--w------- 1 root root 4096 Apr 26 22:53 rescan
--w------- 1 root root 4096 Apr 26 22:53 reset
-rw-r--r-- 1 root root 4096 Apr 26 22:53 reset_method
-r--r--r-- 1 root root 4096 Apr 26 22:53 resource
-rw------- 1 root root 16777216 Apr 26 22:53 resource0
-rw------- 1 root root 268435456 Apr 26 22:53 resource1
-rw------- 1 root root 268435456 Apr 26 22:53 resource1_wc
-rw------- 1 root root 33554432 Apr 26 22:53 resource3
-rw------- 1 root root 33554432 Apr 26 22:53 resource3_wc
-rw------- 1 root root 128 Apr 26 22:53 resource5
-r--r--r-- 1 root root 4096 Apr 26 22:53 revision
-rw------- 1 root root 131072 Apr 26 22:53 rom
lrwxrwxrwx 1 root root 0 Apr 26 22:53 subsystem -> ../../../../bus/pci
-r--r--r-- 1 root root 4096 Apr 26 22:53 subsystem_device
-r--r--r-- 1 root root 4096 Apr 26 22:54 subsystem_vendor
-rw-r--r-- 1 root root 4096 Apr 26 22:53 uevent
-r--r--r-- 1 root root 4096 Apr 26 22:53 vendor
-r--r--r-- 1 root root 4096 Apr 26 22:53 waiting_for_supplier
root@pve:~#
Hmm, there is no driver in use. Probably simplefb took over from UEFI/BIOS. Try binding nvidiafb or nouveau and see how it goes when you restart the console.
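The "simplefb took over" guess can be checked directly: the kernel lists the active framebuffer drivers in /proc/fb. A quick sketch (the names in the output are just examples of what you might see):
Bash:
# if a generic firmware framebuffer is driving the console, expect something like
# "0 simple-framebuffer" or "0 EFI VGA" here
cat /proc/fb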
Once the GPU device is unbound from both VMs, you can bind it back to the Proxmox host by running the following command:
Code:
echo 1 > /sys/class/vtconsole/vtcon0/bind
This command binds the GPU device to the first virtual console of the Proxmox host.
If you have multiple GPUs or want to bind the GPU to a different virtual console, you can replace /sys/class/vtconsole/vtcon0 with the appropriate device path.
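The right vtcon number is not always 0; each entry has a name attribute that says whether it is the dummy console or the framebuffer console. A small sketch to list them before picking one:
Bash:
# list virtual console bindings; the "(M) frame buffer device" entry is usually the one to rebind
for v in /sys/class/vtconsole/vtcon*; do printf '%s: %s\n' "$v" "$(cat "$v"/name)"; done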
Ok thanks, mind sharing how I should bind nvidiafb/nouveau?

It's the second part after the
&&
(the first part is the unbind of vfio-pci):
Code:
echo "0000:01:00.0" > "/sys/bus/pci/devices/0000:01:00.0/driver/unbind" && echo "0000:01:00.0" > "/sys/bus/pci/drivers/amdgpu/bind"
echo "0000:01:00.1" > "/sys/bus/pci/devices/0000:01:00.1/driver/unbind" && echo "0000:01:00.1" > "/sys/bus/pci/drivers/snd_hda_intel/bind"
Your PCI IDs (0000:01:00.0 etc.) and drivers (amdgpu etc.) might be different.
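Before pointing the bind at a driver, it may be worth confirming that the driver's directory actually exists under /sys/bus/pci/drivers; it only appears once the corresponding module is loaded. A quick sketch (the driver names are candidates, not confirmed for this card):
Bash:
# see which of the candidate drivers are currently registered with the PCI bus
ls /sys/bus/pci/drivers | grep -Ei 'nouveau|nvidia|vfio'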
unbind works, but after that, I get this:
Code:
root@pve:~# echo "0000:08:00.0" > /sys/bus/pci/drivers/nouveau/bind
-bash: /sys/bus/pci/drivers/nouveau/bind: No such file or directory
root@pve:~# echo "0000:08:00.0" > /sys/bus/pci/drivers/nvidiafb/bind
-bash: /sys/bus/pci/drivers/nvidiafb/bind: No such file or directory
root@pve:~#
Maybe the drivers need to be loaded first (because they were not used before)? Try running
modprobe nouveau
before.
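Putting the steps from this thread together, a sketch of the full hand-back for this particular card, assuming nouveau is the driver you want on the host and the GPU sits at 0000:08:00.0 (if modprobe already probes and claims the card, the explicit bind may complain that it is busy, which is fine):
Bash:
# load the host driver so /sys/bus/pci/drivers/nouveau exists
modprobe nouveau
# release the GPU from vfio-pci (skip this line if no driver is bound)
echo "0000:08:00.0" > /sys/bus/pci/devices/0000:08:00.0/driver/unbind
# hand the GPU to nouveau
echo "0000:08:00.0" > /sys/bus/pci/drivers/nouveau/bind
# rebind the framebuffer console so the text console comes back (vtcon0 here; check the vtcon names as above)
echo 1 > /sys/class/vtconsole/vtcon0/bind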