I am back at it trying to set up a new PVE cluster using the following hardware:
1 x Dell 2950 Gen 3 (2 onboard NICs, 2 Quad PCIe NICs, and 1 Dual PCIe NIC)
2 x Mid Tower PCs with Intel i3 and 16 GB RAM (1 onboard NIC, 4 Single PCIe NICs)
1 x Full Tower PC with Intel i7 and 24 GB RAM (1 onboard NIC and 1 Quad PCIe NIC, as well as a GTX 1060 GPU and a GT 710 GPU)
Setting up the Dell is straightforward: a standard PVE install with the 2 onboard NICs bonded together for the management and storage (NFS) traffic. One quad NIC is used as a linked connection to my switch (assigned to the pfSense VM), the other has 3 ports linked to another switch for VM traffic, and the dual NIC serves as my dual-WAN interface (assigned to pfSense). All is well there.
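To make the layout concrete, here is roughly what I mean on the Dell, sketched as an /etc/network/interfaces file (the interface names, the address, and the 802.3ad bond mode are placeholders, not my exact config):

    # 2 onboard NICs bonded for management + NFS storage
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

    # 3 ports of the second quad bonded to the other switch for VM traffic
    auto bond1
    iface bond1 inet manual
        bond-slaves enp4s0f0 enp4s0f1 enp4s0f2
        bond-miimon 100
        bond-mode 802.3ad

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0

The pfSense-facing quad and the dual-WAN NIC sit on separate bridges of their own on top of that.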
The other 2 mid tower computers are set up almost the same way as the Dell, with 2 NICs for management and storage and the other 3 for VM traffic.
For the full tower PC, my plan is to have 2 NICs assigned to management and storage, the same as the other computers, and the remaining 3 NICs for VM traffic. The trouble I run into is that I want to run a Linux VM and a Windows VM and give each one its own GPU, and I am getting stuck on that part of the setup.
First I tried installing Debian with a desktop environment and then adding PVE on top, but it would not boot once PVE was installed. So I formatted, installed PVE normally, and then added GNOME, which works pretty well; the GT 710 is providing the video out at this point. My plan is to create an Ubuntu VM, VNC into it from the PVE desktop, and run it full screen as my main desktop. Then I want to create a Windows VM with the GTX 1060 passed through, but when I pass the card through to a VM the host locks up and I have to restart.
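For reference, the passthrough prep I am working from is the usual wiki recipe, roughly like this (the PCI IDs and the 01:00 address are examples, not necessarily mine; the real values come from lspci -nn):

    # /etc/default/grub -- enable IOMMU on the Intel host, then update-grub and reboot
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # /etc/modules -- load VFIO at boot (vfio_virqfd is built into newer kernels)
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd

    # /etc/modprobe.d/vfio.conf -- bind the 1060's GPU + HDMI audio functions
    # to vfio-pci; get the real IDs with: lspci -nn | grep -i nvidia
    options vfio-pci ids=10de:1c03,10de:10f1
    # then: update-initramfs -u and reboot

    # /etc/pve/qemu-server/<vmid>.conf -- pass the whole card to the Windows VM
    machine: q35
    bios: ovmf
    hostpci0: 01:00,pcie=1,x-vga=1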
I am not sure what is going wrong; I have done passthrough on this computer in the past, but with the two GPUs reversed.
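In case it helps with diagnosing the lockup, these are the checks I can post output from:

    # confirm IOMMU is actually enabled
    dmesg | grep -e DMAR -e IOMMU

    # list the IOMMU groups -- the 1060 should be alone in its group
    # (or share it only with its own audio function)
    find /sys/kernel/iommu_groups/ -type l

    # see which driver is bound to each GPU
    lspci -nnk | grep -A3 -i vga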
I hope this provides enough background, and enough detail on where I am now, for the community to give me a hand with this. I have used PVE for a while, but I am still new when it comes to GPU passthrough.
The end goal is for the Dell to run a few VMs: pfSense, mailcow, Nextcloud, a UniFi controller, and 2 nginx reverse proxy servers. The other 2 mid towers will run some web servers.