Trying to get Optiplex GPU pass through working

djfreak

New Member
May 27, 2023
Greetings,

I performed the Proxmox VE 8 upgrade on my 3-node cluster last night. I'm running 3 identical Dell OptiPlexes with a 3-drive ZFS pool (one 500 GB SSD velcroed to each unit). The upgrade was very easy, VE 8 appears to be running perfectly for all of my HA CTs and VMs, and my ZFS pool appears healthy.

The reason I upgraded was the hope of getting resource mappings working, so the OptiPlexes' GPUs can be used by my Windows and Linux VMs.

It's almost working. The Proxmox GUI successfully shows "Mapping matches host data" with the green check for the GPU on node 3, but I'm still getting the dreaded "No IOMMU detected."

I enabled all virtualization options that the OptiPlex offers in its updated BIOS.

I'm ready to add the IOMMU parameters to the GRUB config files, but everyone seems to have slightly different instructions on how to do so.

Has anyone gotten this working with these OptiPlexes?
 
It's almost working. The Proxmox GUI successfully shows "Mapping matches host data" with the green check for the GPU on node 3, but I'm still getting the dreaded "No IOMMU detected."
I don't know anything about your specific hardware, but is it an Intel platform, and is intel_iommu=on present in the output of cat /proc/cmdline? If so, then VT-d is not (fully) enabled in the motherboard BIOS, or not supported by the motherboard or CPU.
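To make that check concrete, here is a small sketch (the helper name is mine, not part of any Proxmox tooling) that reports whether the relevant IOMMU parameters appear in a kernel command line string:

```python
# Minimal sketch: report which IOMMU-related kernel parameters are present.
# On a Proxmox host you would feed it the live command line:
#     iommu_flags(open("/proc/cmdline").read())

def iommu_flags(cmdline: str) -> dict:
    """Report IOMMU-related parameters found in a kernel command line."""
    params = cmdline.split()
    return {
        "intel_iommu_on": "intel_iommu=on" in params,  # Intel VT-d requested
        "iommu_passthrough": "iommu=pt" in params,     # passthrough mode
    }
```

If intel_iommu=on is present but the host still reports "No IOMMU detected", the remaining suspects are the BIOS VT-d setting and CPU/board support, as noted above.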
 
I don't know anything about your specific hardware, but is it an Intel platform, and is intel_iommu=on present in the output of cat /proc/cmdline? If so, then VT-d is not (fully) enabled in the motherboard BIOS, or not supported by the motherboard or CPU.
Thank you very much for your reply leesteken.

There have been many frustrating days and nights getting this cluster and all of its services working but last night was just fantastic.

OK, first: I got resource mapping working. I have my OptiPlex's HD 530 GPU as the main GPU for my Win10 Pro install. I couldn't believe it when it worked, but it's working. I did not have to blacklist the GPU, but I did have to alter GRUB and the modules file as instructed, and there is an extra virtualization checkbox in the BIOS you have to check, though it's labeled with some unique Dell language. Making the PCI card work in the VM scrambled my Windows install, which I still don't understand, but who cares: it forced me to rebuild my Windows image from scratch on the new processor via Proxmox 8.0.3, and the performance is really excellent. Outstanding work, Proxmox team.
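For anyone following along, the commonly documented GRUB and modules changes for Intel IOMMU passthrough on Proxmox look roughly like this (a sketch of the usual guidance, not necessarily the exact lines used here):

```
# /etc/default/grub -- add the IOMMU flags to the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules -- load the VFIO modules at boot:
vfio
vfio_iommu_type1
vfio_pci

# Then apply and reboot:
#   update-grub
#   update-initramfs -u -k all
#   reboot
```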

I successfully added the GPU to my new Windows install, and playing videos from iTunes and from Plex via a browser (the two stress tests I use) both behave so much better. And this is on a $150, 8-year-old Dell OptiPlex. Really excellent results.

Now I have a new problem and a new question, leesteken (or anyone else who might be able to help). I have 3 identical OptiPlexes. I'm going to reboot my other two nodes into the BIOS and follow the exact same steps to deal with the IOMMU issue. However, I can't give the GPUs on all 3 nodes the same name like we do with ZFS. I tried, and it doesn't work: I have to give each shared resource a unique name. How, then, will I make this beautiful Windows install migratable again as part of the HA cluster?
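If I understand the PVE 8 resource-mapping feature correctly, a single cluster-wide mapping ID can hold one entry per node, which is what lets a VM using the mapping migrate. A hedged sketch (mapping name, node names, PCI paths, and the vendor/device ID here are made up; check yours with lspci -nn):

```
# Datacenter -> Resource Mappings in the GUI, or roughly via the CLI:
# one mapping ID ("igpu") with a per-node entry for each host.
pvesh create /cluster/mapping/pci --id igpu \
  --map node=node1,path=0000:00:02.0,id=8086:1912 \
  --map node=node2,path=0000:00:02.0,id=8086:1912 \
  --map node=node3,path=0000:00:02.0,id=8086:1912

# Attach the mapped device to the VM instead of a raw PCI address:
qm set 100 -hostpci0 mapping=igpu,pcie=1
```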
 
I did have to alter GRUB and the modules file as instructed, and there is an extra virtualization checkbox in the BIOS you have to check, though it's labeled with some unique Dell language.
I'm in the same boat as you were: days of trying to pass through the GPU on a 7050 to Windows with no video output, other than Windows recognizing it in Device Manager. I can pass through video in a test Ubuntu VM.

Would you be able to point me in the right direction with the extra virtualization checkbox in the BIOS, and share your GRUB and modules changes with me? Thanks!
 
I'm in the same boat as you were: days of trying to pass through the GPU on a 7050 to Windows with no video output, other than Windows recognizing it in Device Manager. I can pass through video in a test Ubuntu VM.

Would you be able to point me in the right direction with the extra virtualization checkbox in the BIOS, and share your GRUB and modules changes with me? Thanks!
Did you add ALL hardware as a pooled resource for each node? I'm having trouble with this again after a cluster rebuild as well, but I'll try to help. Starting from scratch last night, I added every single piece of hardware for all 3 of my OptiPlexes as pooled resources. The master checkbox at the top of the Add window allows you to do this.

I'm stuck on the IOMMU error again, though. I need to find the code I used to alter GRUB; I'll post here again when I find it.

Just had a thought: did you try installing the VirtIO drivers again with the PCI device added? I'm going to try that if it fails.

I'll post the GRUB and modules code later, but it sounds like you may be past that.

I focused too much on the OptiPlex BIOS. As long as you have all virtualization options enabled in the BIOS, you should be fine; most likely that's not the issue, based on what I've found.
 
Adding this for future reference:

https://bobcares.com/blog/proxmox-gpu-passthrough/
 
