Hi everyone!
I just wanted to write a guide for those (like myself) trying out GPU passthrough with the HP ProDesk 600 G1 in 2025...
I needed to enable VT-d using the "Replicated setup" method (previously mentioned
here) in the UEFI/BIOS, as the VT-d option is not present in the setup menus (GUI).
I'm also running the latest UEFI/BIOS firmware, 2.78 (as of writing). Make sure to update!
Steps for enabling VT-d using the "Replicated setup" method:
- Format a USB stick with FAT32 and insert it into the HP ProDesk 600 G1.
- Go to "Replicated setup" method under the File section in the UEFI/BIOS and choose to SAVE the file.
- Once you have saved the file to the USB stick, edit the file (see code section below for instructions).
- You can either remove the USB and edit the file on another computer or continue booting into your OS and edit the file.
- Make sure that the USB stick containing the edited text file (CPQSETUP.TXT) is properly inserted into the ProDesk 600 G1 (use the same USB port if you removed the stick earlier).
- Go to "Replicated setup" method under the File section in the UEFI/BIOS and choose to READ the file.
The following changes were made in the CPQSETUP.TXT file to enable VT-x/VT-d and to enforce UEFI-only mode with legacy support disabled.
Note that I don’t use PXE boot on my Proxmox host, which is why everything related to PXE has been disabled.
Code:
Virtualization Technology (VTx)
Disable
*Enable
Virtualization Technology Directed I/O (VTd)
Disable
*Enable
Legacy Support
*Disable
Enable
PXE Option ROMs
*Do Not Launch
UEFI Only
Legacy Only
Storage Option ROMs
Do Not Launch
*UEFI Only
Legacy Only
Video Option ROMs
--
*UEFI Only
Legacy Only
...and for those of you doing GPU passthrough in Proxmox with Linux VM(s): it's good practice to stop the Proxmox host from loading both nouveau and the proprietary NVIDIA drivers, especially if (like me) you're passing an NVIDIA graphics card through to your VM(s). You can do this with the following command:
Code:
echo -e "blacklist nouveau\nblacklist nvidia\nblacklist nvidiafb" \
| sudo tee /etc/modprobe.d/blacklist-gpu.conf
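Note that module blacklists only take full effect once the initramfs has been rebuilt (the update-initramfs step near the end of this guide) and the host rebooted. If you'd rather inspect the file before it lands in /etc/modprobe.d/, the same idea can be staged through a temp file first (the paths below are just for illustration):
Code:

```shell
# Build the blacklist in a temp file and inspect it before installing it.
tmp=$(mktemp)
printf 'blacklist %s\n' nouveau nvidia nvidiafb > "$tmp"
cat "$tmp"
# When it looks right, move it into place:
# sudo install -m 644 "$tmp" /etc/modprobe.d/blacklist-gpu.conf
```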
On the HP ProDesk 600 G1,
intel_iommu=on is required to enable VT-d.
For the second flag, I initially tested with
iommu=on while troubleshooting, since it forces all PCIe devices (including host-owned ones) through IOMMU translation.
That can be useful on OEM desktop firmware when validating DMAR behavior or dealing with quirks.
For normal Proxmox use, iommu=pt is preferable and matches the Proxmox documentation: host devices are treated as trusted and use identity mapping (lower overhead, fewer issues), while VFIO-assigned devices are still fully isolated via the IOMMU.
Rule of thumb: use
iommu=pt for a Proxmox host with fixed, trusted hardware; try
iommu=on temporarily for debugging or if you have untrusted or hot-plug PCIe devices.
In this context, “untrusted devices” means PCIe-capable hardware that can perform DMA but isn’t fixed or fully under the host’s control (e.g. Thunderbolt, hot-plug PCIe, external devices), not that the hardware itself is malicious.
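Both flags go on the kernel command line. On a GRUB-booted Proxmox host (the usual case on this machine) that means editing /etc/default/grub and then running update-grub; hosts booted via systemd-boot keep the flags in /etc/kernel/cmdline instead. A sketch, assuming your file already carries the default quiet flag:
Code:

```shell
# /etc/default/grub (excerpt): merge these with any flags already present.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# Echo it back as a quick sanity check of the quoting:
echo "$GRUB_CMDLINE_LINUX_DEFAULT"
```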
I also added the following module names (with comments explaining what each module does) to my
/etc/modules file (
which can be read more about here):
Code:
# /etc/modules
#
# Kernel modules to load early at boot.
# Used to ensure critical subsystems are available before devices are initialized.
# VFIO core framework.
# Required for all PCI passthrough.
vfio
# IOMMU backend for VFIO.
# Enforces DMA isolation using VT-d / AMD-Vi.
vfio_iommu_type1
# VFIO PCI driver.
# Binds selected PCI devices (GPU) to vfio-pci instead of host drivers.
vfio_pci
# VFIO interrupt handling support.
# Optional on modern kernels but harmless to load.
vfio_virqfd
# Loop device support.
#
# Allows Linux to treat a file like a disk.
# Used for ISO files, Flatpak/Snap apps, and disk images.
# Loaded early so these features always work.
loop
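The /etc/modules format is simply one module name per line, with # starting a comment. A quick way to sanity-check which names will actually be loaded is to strip the comments and blank lines (a sample file stands in below; on the host, run the grep against /etc/modules itself):
Code:

```shell
# Strip comments and blank lines to see exactly which modules will load.
cat > /tmp/modules.sample <<'EOF'
# comment line, ignored at boot
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
loop
EOF
grep -vE '^[[:space:]]*(#|$)' /tmp/modules.sample
```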
Identify the hardware IDs of the PCI devices you want to hide from the Proxmox host and add the following to
/etc/modprobe.d/vfio.conf:
Note (!) that
10de:0ffa and
10de:0e1b in the code snippet below are specific to
MY Proxmox setup and will
NOT be the same for your setup, so please
change (!) these to your system's device IDs for passthrough and early VFIO device claim!
Code:
# File: /etc/modprobe.d/vfio.conf
#
# This file configures kernel module options for vfio-pci.
# Files in /etc/modprobe.d/ are read at boot and when modules load.
# It is the correct place to tell Linux which PCI devices VFIO should claim.
# Bind specific PCI devices to vfio-pci at boot.
# This prevents the host (Proxmox) from loading normal GPU/audio drivers
# and ensures the devices are reserved for passthrough to a VM.
#
# Device IDs are in the format: vendor_id:device_id
# You can find them by running on the host:
#
# lspci -nn
#
# For the current case:
# 10de:0ffa → NVIDIA Quadro K600 (GPU)
# 10de:0e1b → NVIDIA HDMI audio function of the K600
#
# Both must be passed through together for correct GPU operation.
#
# disable_vga=1 disables legacy VGA arbitration for this device.
# VGA arbitration is how Linux decides which GPU may provide
# VGA/display access when multiple GPUs are present.
# Disabling it ensures the GPU is not shared with the host and
# is used exclusively by the VM during passthrough.
# This avoids conflicts, especially on older platforms or when
# the GPU is used as a primary display.
options vfio-pci \
ids=10de:0ffa,10de:0e1b \
disable_vga=1
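To find the IDs for your own hardware, look for the bracketed vendor:device pairs at the end of each lspci -nn line. A sketch using two saved sample lines from my K600 (your output will differ; on the host, just run lspci -nn directly):
Code:

```shell
# Two sample lines stand in for real `lspci -nn` output here.
cat > /tmp/lspci.txt <<'EOF'
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107GL [Quadro K600] [10de:0ffa]
01:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b]
EOF
# Pull out every [vendor:device] pair:
grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' /tmp/lspci.txt | tr -d '[]'
```

After the reboot at the end of this guide, lspci -nnk -s 01:00.0 should report "Kernel driver in use: vfio-pci" for the GPU (substitute your own PCI address).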
After you've completed every step above, run the following commands on your Proxmox host:
Code:
update-grub
update-initramfs -u -k all
reboot
For the Proxmox VM(s), make sure to use the following settings:
- Use OVMF (UEFI) (required for GPU Passthrough)
- Machine: q35
- Use qemu-guest-agent (so that Proxmox can fetch useful information from the VM)
- Set the VM Display to "No display" before booting the VM, otherwise you won't (!) be able to get GPU passthrough running.
- Before you do this step, I highly recommend installing and enabling the OpenSSH server daemon (the ssh service on Debian / sshd on Fedora) on the VM itself, so you can access the VM via SSH instead of switching back and forth between "SPICE" with the console and "No display" with an external monitor.
- If previously not installed, install the following package with your distro's package manager:
- Debian (using apt)
sudo apt update
sudo apt install -y openssh-server
sudo systemctl enable --now ssh
- Additionally, if you're using an OS-provided firewall on Debian (e.g. UFW), allow SSH through it as well, for example: sudo ufw allow ssh
- Fedora
- Fedora Atomic Desktops (using rpm-ostree), for Atomic distros like Bazzite, Silverblue, Kinoite, etc...
sudo rpm-ostree install -y openssh-server
sudo systemctl reboot
sudo systemctl enable --now sshd
- Additionally, if you're using the OS-provided firewall, you must run the following commands as well:
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --reload
- Fedora (using dnf)
sudo dnf install openssh-server
sudo systemctl enable --now sshd
- Additionally, if you're using the OS-provided firewall, you must run the following commands as well:
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --reload
- Run the following command on the VM to ensure that the ssh / sshd service (the OpenSSH server daemon) is enabled (starts at boot) and running:
- Debian
sudo systemctl enable --now ssh
- Fedora:
sudo systemctl enable --now sshd
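For reference, after attaching the GPU in the GUI (Hardware → Add → PCI Device, with "All Functions" checked so the HDMI audio function comes along), the VM's config file ends up with a line like the one below. The VM ID (100) and PCI address (01:00) are from my setup and will differ on yours:
Code:

```
# /etc/pve/qemu-server/100.conf (excerpt); VM ID and PCI address are examples.
hostpci0: 0000:01:00,pcie=1
```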
After all of this, I also added a USB device to the Proxmox VM and could easily select the desired connected USB device.
Good reads:
https://pve.proxmox.com/wiki/USB_Physical_Port_Mapping
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE (if you need to share the dGPU with more than one VM, which I'll probably look into using in the near future when I buy a better graphics card for this machine)
I hope this can help someone in the future. 