New to Proxmox and set up AMD this way?

charleslcso

New Member
Oct 1, 2022
Hello all, I learned about Proxmox from Teresa (Morgonaut). I read in the forum that there's another post mentioning her video, "the New Way to Build a Hackintosh".

I don't know much about virtualisation beyond running simple things like VMware, VirtualBox and Parallels, so I guess Proxmox is the deep end for me.

I'm planning to build a new PC to run Ubuntu, macOS and Windows concurrently, just as Teresa demonstrated in her video: the mouse cursor can travel between three monitors (one for Windows, one for macOS, one for a Linux desktop).

macOS would be the main daily driver (using the integrated GPU), Windows would run 3D software (using a discrete RTX 3080), and Ubuntu (using the integrated GPU) would be for PHP, Python and React application development.

As AMD Zen 4 and Ryzen 7000 just came out, I guess that is the obvious choice.
After browsing and learning from this forum, I believe I'm in a similar position to this user and his question.

Therefore, I would like to learn more before jumping in and buying a suitable motherboard.

My preliminary choice of hardware would be:

- AMD Ryzen 9 7950X
- RTX 3080 series
- 64GB RAM

Would someone make a recommendation on what to look for (or what not to buy) in a motherboard and video cards?

Also, which Proxmox documents should I start reading before setting up the BIOS and Proxmox v7.2? I hope to get a starting point to speed up this learning process.

Many thanks!
 
With Linux it's usually better to buy hardware that is at least a year old. Otherwise it's not well tested and you can run into a lot of problems, instabilities and bad performance. And with PVE passthrough it's very important to choose the right mainboard model, GPU model and chipset. Passing through iGPUs can be hard too.
I think nearly no one will have any experience running PVE on Zen 4 yet, so it would be hard to recommend any hardware.
It would be better to buy something that other people have verified to work fine with PCI passthrough.
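Once a board arrives, the usual first sanity check is to turn the IOMMU on and confirm the kernel sees it. A minimal sketch, assuming a GRUB-booted PVE host with an AMD CPU (these are the commonly documented kernel parameters, nothing board-specific):

    # in /etc/default/grub on the PVE host:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

    # then apply and reboot:
    update-grub
    reboot

    # after the reboot, check that the IOMMU actually came up:
    dmesg | grep -e IOMMU -e AMD-Vi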
 
Last edited:
One problem is PCI passthrough with consumer hardware. Usually you only get 20 or 24 PCIe lanes, of which 4 are used for an M.2 slot, so only one x16 slot (or x16 + x4) is usable. But ideally you want two electrical x16 slots directly connected to the CPU for two dedicated GPUs. The iGPU would be best used for Linux, because often the BIOS only lets you define whether the iGPU or a PCIe GPU should be the primary GPU when booting. You would then need two PCIe slots directly connected to the CPU, each in its own IOMMU group. PCIe slots connected to the chipset usually can't be used for passthrough, as those devices would share an IOMMU group with all the other hardware like the SATA controller, NICs and so on.
When not using enterprise hardware, which has plenty of PCIe lanes, you would need to choose a mainboard and chipset that allow you to split the 16 PCIe lanes into two x8 links: two mechanical PCIe x16 slots, each electrically connected as PCIe x8 directly to the CPU.
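Whether a board really puts each CPU-connected slot into its own IOMMU group is something you can only verify on running hardware (or from reports by owners). A small sketch that lists the groups, using only the standard sysfs layout:

    # list every PCI device by IOMMU group; a GPU is only cleanly passable
    # if it (plus its HDMI audio function) sits in a group by itself
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=$(basename "$(dirname "$(dirname "$d")")")
        printf 'IOMMU group %s: ' "$g"
        lspci -nns "$(basename "$d")"
    done | sort -V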
 
Ideally (if such a consumer board even exists) it would be, at minimum:

4 used for M.2 slot (1 SSD)
16x used for 1st GPU
16x used for 2nd GPU
2x used for SATA, NICs and so on

I am planning to have one dedicated SSD for each OS; that means 12 lanes used just for M.2 SSDs, plus one video card taking up 16.

Are you saying I need one GPU per OS? So one video card for macOS, one for Windows, and the iGPU for Linux?

Reference board:
Asus ProArt Z690-CREATOR WIFI
* 2x PCIe 5.0/4.0/3.0 x16 slots
* 1x PCIe 3.0 x16 slot

Sapphire Nitro RX6900 XT SE
* PCIe 4.0 x16
 
Ideally (if such a consumer board even exists) it would be, at minimum:

4 used for M.2 slot (1 SSD)
16x used for 1st GPU
16x used for 2nd GPU
2x used for SATA, NICs and so on
It won't exist, as you only get up to 24 PCIe lanes in total. All the onboard stuff (NICs, SATA controller, soundcard, ...) as well as all the slots (M.2 and all PCIe slots) need to share these 20 or 24 PCIe lanes.
Are you saying I need one GPU per OS? So one video card for macOS, one for Windows, and the iGPU for Linux?
Yup.
Reference board:
Asus ProArt Z690-CREATOR WIFI
* 2x PCIe 5.0/4.0/3.0 x16 slots
* 1x PCIe 3.0 x16 slot

Sapphire Nitro RX6900 XT SE
* PCIe 4.0 x16
Don't get confused by PCIe slots. If a mainboard has six PCIe x16 slots, that doesn't mean they are all usable. You can have a mechanical x16 slot that is only electrically connected as x4. Or you have three PCIe x16 slots, but that doesn't mean you can use three devices with 16 lanes each. Usually it means you can use one PCIe x16 slot with 16 lanes while the other two slots aren't usable. Or you use two of the three PCIe x16 slots, and then both will work with 8 lanes each and the third one isn't usable. Or you use all three PCIe x16 slots, and then one will work with 8 PCIe lanes and the other two with just 4 lanes each.
And some slots can't be used at all for PCI passthrough because they are connected to the chipset and not to the CPU.

You really have to read the block diagrams to see how each slot is internally connected... at least if the manufacturer gives you access to that in the manual.
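If the manual doesn't include a block diagram, you can at least read the negotiated link width off a running system. A sketch, where 0f:00.0 is a placeholder bus address (substitute your GPU's address from lspci):

    # LnkCap = what the device supports, LnkSta = what the slot negotiated
    lspci -vv -s 0f:00.0 | grep -E 'LnkCap:|LnkSta:'
    # "LnkCap: ... Width x16" but "LnkSta: ... Width x8" means the
    # mechanical x16 slot is only running with 8 lanes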
 
I guess I need to go down the Xeon path... with a Supermicro motherboard.

Any further advice for this combo?

Going back to the observation in my post at the top: it is possible to run all OSes concurrently, each with its own monitor, right?
 
Going back to the observation in my post at the top: it is possible to run all OSes concurrently, each with its own monitor, right?
Only if you have a discrete GPU for each of them. I can recommend 6800 (XT) GPUs (some lower 6000-series GPUs don't reset properly) and RX 5x0 cards (old, and they require vendor-reset).
Or is there a GPU supported by Proxmox that can be shared over multiple VMs with separate monitor outputs? I have no knowledge of such a device.
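For the RX 5x0 cards, "requires vendor-reset" usually means building gnif's out-of-tree kernel module on the PVE host so the card resets cleanly between VM starts. Roughly like this (package names as on a current PVE install; double-check the project's README):

    apt update
    apt install -y git dkms build-essential pve-headers-$(uname -r)
    git clone https://github.com/gnif/vendor-reset
    cd vendor-reset
    dkms install .
    echo vendor-reset >> /etc/modules   # load the module at every boot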

I can recommend the 5950X with the X570S AERO G (Gigabyte allows you to select the boot GPU from all three slots, and 2 of the 3 USB controllers can be passed through). You'll need a case that allows for more than 7 PCI brackets, as the last slot is at the very end of the motherboard. But the AM4 platform became low-end/obsolete this week, and maybe you want enterprise-class hardware anyway.
 
Or is there a GPU supported by Proxmox that can be shared over multiple VMs with separate monitor outputs? I have no knowledge of such a device.
As far as I know there is no Nvidia GPU that would support this. They either have no display outputs at all, or the outputs can't be used anymore once the card is virtualized using vGPU. Not sure about AMD GPUs; maybe that single one supporting SR-IOV could do it.
 
Only if you have a discrete GPU for each of them. I can recommend 6800 (XT) GPUs (some lower 6000-series GPUs don't reset properly) and RX 5x0 cards (old, and they require vendor-reset).
Or is there a GPU supported by Proxmox that can be shared over multiple VMs with separate monitor outputs? I have no knowledge of such a device.

I can recommend the 5950X with the X570S AERO G (Gigabyte allows you to select the boot GPU from all three slots, and 2 of the 3 USB controllers can be passed through). You'll need a case that allows for more than 7 PCI brackets, as the last slot is at the very end of the motherboard. But the AM4 platform became low-end/obsolete this week, and maybe you want enterprise-class hardware anyway.
I'm trying to learn about the whole methodology. Still some way to go. Thank you for hanging in there with me.

This board is very cheap now. Sounds interesting for a Proxmox beginner like me!

Morgonaut said in her video that her setup uses only one Nvidia video card. She said we can run any number of VMs without a GPU, and only one OS with full video acceleration on one monitor. I think I understand, but I'm still not sure how to do this in Proxmox.

The 3D application on Windows supports Nvidia CUDA only... AMD Radeon, I believe, is out of the question. Maybe, I don't know, I can use a 6800 XT for macOS and an RTX 3080 for Windows on the X570S AERO G? Wait, everyone mentioned PCIe passthrough for video; is it safe to bet that this combo should work, albeit at x8 bandwidth?

This is a lot more complicated than I originally thought, and it is impractical to trial-and-error by buying various cards and motherboards... and 20 lanes are not enough to run multiple OSes concurrently.

If enterprise-class hardware is the only way to achieve this, ASUS server motherboards support PCIe 3.0 only... is mixing an enterprise motherboard with consumer components a viable solution?
 
I'm trying to learn about the whole methodology. Still some way to go. Thank you for hanging in there with me.

This board is very cheap now. Sounds interesting for a Proxmox beginner like me!

Morgonaut said in her video that her setup uses only one Nvidia video card. She said we can run any number of VMs without a GPU, and only one OS with full video acceleration on one monitor. I think I understand, but I'm still not sure how to do this in Proxmox.
That's because a VM can't use any physical hardware of the host without it being passed through using PCI passthrough. And hardware that has been passed through can't be used anymore by the host or by any other VM. So if you want GPU acceleration for your VMs, you need a dedicated GPU for each VM. There is a hacky way to split a consumer Nvidia GPU into multiple virtual GPUs (normally you would need to buy an expensive professional GPU and then pay for a vGPU license), but with virtual GPUs you only get the hardware acceleration; the video outputs can't be used any longer.
So yes, if you want 3 VMs with video output, you need 3 GPUs (or even 4, because you might need one for the PVE host too).
The same goes for all other hardware like soundcards and so on. So you need a lot of PCIe slots.
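For reference, once the IOMMU side is set up, handing a whole GPU to a VM is a single command. A sketch with made-up IDs (VM 101, GPU at 0000:0f:00; pcie=1 assumes the VM uses the q35 machine type):

    # pass both functions of the card (video + HDMI audio) and make it
    # the VM's primary display
    qm set 101 -hostpci0 0000:0f:00,pcie=1,x-vga=1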
What works fine are virtual NICs and virtual disks. There you can work with virtual devices and share the physical NICs/SSDs that way.
USB is another thing. There is USB passthrough, but it is slow and not that stable because it's fully emulated. It works fine for a keyboard and mouse, but if you have devices that need a lot of bandwidth, like a USB soundcard, USB capture card, USB HDD and so on, you might want to buy a PCIe USB controller card for each VM and pass those through, so each VM has direct access to a physical USB controller.
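So a single VM might end up configured roughly like this (again with made-up IDs; the NIC and disk are virtual and shared, while the PCIe USB controller card at 0000:10:00 is passed through whole, like a GPU):

    qm set 101 --net0 virtio,bridge=vmbr0   # virtual NIC on the host bridge
    qm set 101 --scsi0 local-lvm:32         # new 32 GiB virtual disk
    qm set 101 -hostpci1 0000:10:00         # dedicated USB controller card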
If enterprise-class hardware is the only way to achieve this, ASUS server motherboards support PCIe 3.0 only... is mixing an enterprise motherboard with consumer components a viable solution?
I would at least recommend ECC RAM, enterprise/datacenter SSDs and a UPS for data integrity.
 
I'm trying to learn about the whole methodology. Still some way to go. Thank you for hanging in there with me.

This board is very cheap now. Sounds interesting for a Proxmox beginner like me!
I'm happy for you that you can afford it so easily, but there are much cheaper consumer boards (that don't have an X570 chipset and cannot do as much passthrough).
Morgonaut said in her video that her setup uses only one Nvidia video card. She said we can run any number of VMs without a GPU, and only one OS with full video acceleration on one monitor. I think I understand, but I'm still not sure how to do this in Proxmox.
Yes, as many VMs as will fit in memory and they will use the (virtual) CPU to draw their virtual screens. Use PCIe passthrough to pass real hardware to VMs (with all its caveats).
The 3D application on Windows supports Nvidia CUDA only... AMD Radeon, I believe, is out of the question. Maybe, I don't know, I can use a 6800 XT for macOS and an RTX 3080 for Windows on the X570S AERO G? Wait, everyone mentioned PCIe passthrough for video; is it safe to bet that this combo should work, albeit at x8 bandwidth?
I don't expect macOS to support the latest GPUs, as Apple is now using their own ARM-based hardware. I don't know if it supports Nvidia; I only remember people using old AMD GPUs. I guess you need to research this.
This is a lot more complicated than I originally thought, and it is impractical to trial-and-error by buying various cards and motherboards... and 20 lanes are not enough to run multiple OSes concurrently.

If enterprise-class hardware is the only way to achieve this, ASUS server motherboards support PCIe 3.0 only... is mixing an enterprise motherboard with consumer components a viable solution?
If you need more lanes, you need to go Threadripper, EPYC or Xeon. Do you really need expensive new fast hardware, or could you buy a second-hand enterprise server for your homelab?
 
I'm happy for you that you can afford it so easily, but there are much cheaper consumer boards (that don't have an X570 chipset and cannot do as much passthrough).

Yes, as many VMs as will fit in memory and they will use the (virtual) CPU to draw their virtual screens. Use PCIe passthrough to pass real hardware to VMs (with all its caveats).

I don't expect macOS to support the latest GPUs, as Apple is now using their own ARM-based hardware. I don't know if it supports Nvidia; I only remember people using old AMD GPUs. I guess you need to research this.

If you need more lanes, you need to go Threadripper, EPYC or Xeon. Do you really need expensive new fast hardware, or could you buy a second-hand enterprise server for your homelab?

I read your other posts in the forum about starting with a second-hand machine.

Yes, a second-hand machine is my approach now!

Thank you, and thanks to Dunuin. This deep end is indeed too steep for me.

I found this machine online, second-hand, for USD 60.

https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c00712808

5 PCIe slots; 3x PCIe x4, 2x PCIe x8.

Can I throw a PCIe 4.0 card into it, or should I look for PCIe 3.0 video cards?
 
Why do you want such a virtual-OS mess in one hardware box?

Maybe it would be more useful to have two or three boxes on different platforms, with a fast LAN and a KVM switch between them?
 
