Advanced Setup with GPU Pass Through

Astraea

Renowned Member
Aug 25, 2018
I am back at it, trying to set up a new PVE cluster with the following hardware:

1 x Dell 2950 Gen 3 (2 onboard NICs, 2 quad PCIe NICs, and 1 dual PCIe NIC)
2 x Mid Tower PCs with Intel i3 and 16 GB RAM (1 onboard NIC, 4 single PCIe NICs)
1 x Full Tower PC with Intel i7 and 24 GB RAM (1 onboard NIC, 1 quad PCIe NIC, plus a GTX 1060 GPU and a GT 710 GPU)

Setting up the Dell is straightforward with a standard PVE install: the 2 onboard NICs are bonded for the management and storage (NFS) networks. One quad NIC is a bonded uplink to my switch (assigned to PFSense, running as a VM), another has 3 ports bonded to a second switch for VM traffic, and the dual NIC serves as my dual-WAN interface (also assigned to PFSense). All is well there.
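For anyone setting up something similar, the bonded management pair plus a bridge on top can be sketched in /etc/network/interfaces roughly like this (the interface names, bond mode, and addresses here are illustrative placeholders, not my actual values):

```
# Bond the two onboard NICs for management/storage
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup

# Bridge on top of the bond so the host (and VMs, if wanted) can use it
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

The VM-traffic bonds are the same idea: another bondX stanza over the PCIe NIC ports, with a vmbrX bridge on top that the VMs attach to.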

The other 2 mid tower computers are set up almost the same way as the Dell, with 2 NICs for management and storage and the other 3 for VM traffic.


On the full tower PC, my plan is to have 2 NICs assigned for management and storage, same as the other computers, and the remaining 3 NICs for VM traffic. The trouble is that I want to run a Linux VM and a Windows VM, each with its own GPU, and I am getting stuck on that part of the setup.

First I tried installing Debian with a desktop environment and then adding PVE on top, but it would not boot once PVE was installed. So I formatted, installed PVE normally, and then added GNOME to it, which works pretty well; the 710 is providing the video out at this point. My plan is to create an Ubuntu VM, VNC into it from the PVE desktop, and run it full screen as my main desktop. Then I want to create a Windows VM with the 1060, but when I pass that GPU through to a VM the host locks up and I have to restart.
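One common cause of that kind of host lockup is the host driver still being bound to the GPU when the VM grabs it. A sketch of the usual isolation config (the PCI IDs below are examples for a GTX 1060 and its HDMI audio function; check the real ones with `lspci -nn | grep -i nvidia`):

```
# /etc/default/grub — enable the IOMMU (Intel CPU in this box)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf — bind the 1060 (GPU + audio) to vfio-pci at boot
options vfio-pci ids=10de:1c03,10de:10f1

# /etc/modprobe.d/blacklist.conf — keep host drivers off the passed-through card
blacklist nouveau
```

After editing, run `update-grub` and `update-initramfs -u`, then reboot. With two NVIDIA cards of different models this gets trickier, since ID-based binding can catch the wrong card; that may be part of what I am hitting here.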

I am not sure what is going wrong. I have done passthrough on this computer in the past, but the GPUs were in the opposite slots.

I hope this provides enough background on where I am now for the community to give me a hand. I have used PVE for a while, but I am still new when it comes to GPU passthrough.

The end goal is for the Dell to run a few VMs: PFSense, mailcow, Nextcloud, a UniFi controller, and 2 nginx reverse proxy servers. The other 2 mid towers will run some web servers.
 
I wanted to post an update on this. I got a three-node cluster up and working, and it is running very well.

As for the desktop system, I did manage to pass through 2 GPUs to 2 different VMs once I added a third GPU for PVE itself to use. However, I ran out of PCIe lanes on the CPU, so I would have had to give up my PCIe NIC, which was not going to work.
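For reference, attaching a GPU to a VM from the PVE CLI looks roughly like this (VMID 101 and PCI address 01:00 are placeholders; these commands only make sense on the Proxmox host itself):

```
# Find the GPU's PCI address first
lspci -nn | grep -i nvidia

# Pass the whole device (GPU + audio function) through to VM 101
qm set 101 -hostpci0 01:00,pcie=1,x-vga=1

# Passthrough generally wants OVMF (UEFI) firmware and the q35 machine type
qm set 101 -bios ovmf -machine q35
```

Passing `01:00` rather than `01:00.0` hands over all functions of the card, so the HDMI audio goes with it.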

I also tried installing Ubuntu and using KVM with a GUI management tool, and though I did get things working, the performance and stability were not acceptable.

In the end I settled for a dual-boot setup, with each OS on its own SSD and the Windows install getting additional SSDs for game storage.
 
