Thinking about taking the leap

Asphyxiant

New Member
Oct 20, 2024
Hey everyone, I’ve used ESXi / Xen / Hyper-V in the past, so I’m familiar with hypervisors, etc.

The machine I would be installing Proxmox on would be:

Dell Precision 7865
Specs:
Ryzen Threadripper Pro 5945WX (12 cores / 24 threads)
128GB DDR4-3200
NVIDIA RTX A4500 20GB ECC

Storage:
Onboard dual 2TB M.2 NVMe / RAID 1 (Samsung 990s)
Dell UltraSpeed Dual PCIe controller / RAID 0 (Samsung 990s)
4x SATA Seagate Exos X20 installed in hot-swap bays

Network:
Onboard 10 Gbps (Marvell AQC113CS/AQC113)

I do have 2 PCIe slots open:
1x full-length PCIe Gen 4 x16
1x PCIe Gen 4 x8

#Question 1
The BIOS has RAID enabled for the NVMe/SATA ports. Would I need to set the ports back to AHCI?
- Would Proxmox have the drivers if I left the drives set up in RAID mode via the BIOS?
- Does Proxmox do software RAID if I set the controller back to AHCI?

#Question 2
Would Proxmox have the driver for the 10 GbE Ethernet controller?

#Question 3
Would Proxmox have the drivers for the Quadro A4500?
- I would like to utilize vGPU / GPU partitioning. Does Proxmox have the ability to do this? I assume it supports simple passthrough to the VMs.

#Question 4
I think I already know the answer to this one, but I assume I would be disabling the TPM in the BIOS?

#Question 5
Would I need a dedicated GPU for Proxmox itself?

As for the setup of Proxmox:

2TB RAID 1:
Partition for Proxmox itself
Partition for storage of ISOs

8TB RAID 0:
VMs / VM disks

RAID 5 Array:
Storage for Plex content (Currently 30TB of content)

Network would be bridged to the connected router

VM 1 - Plex (Ubuntu Server LTS)
VM 2 - Linux (Ubuntu Server LTS) for network surveillance / services / etc.
VM 3 - Windows 11 Pro remote gaming VM (using Parsec for connectivity). Currently doing this via Hyper-V/Parsec; it can run most games at 4K with about 120 fps on average.

Is this the appropriate approach? Would the hardware I have be supported by Proxmox?

I apologize in advance if this is not the appropriate forum / place to ask these questions. I just want to dump Windows / Hyper-V; I’m constantly tired of Windows causing issues with the VMs I’m running. I would not be importing any of the VMs; I would be starting over from scratch.

Thoughts? Ideas? Am I approaching this from the right angle? Is anyone using a Dell Precision 7865 workstation successfully with Proxmox?

Thank you for your time, and I look forward to any input / responses / suggestions / advice.

-Asphyxiant
 
Did you have any luck with the Marvell AQC113? I am about to migrate to a board that has one.
 
#Question 1
The BIOS has RAID enabled for the NVMe/SATA ports. Would I need to set the ports back to AHCI?
- Would Proxmox have the drivers if I left the drives set up in RAID mode via the BIOS?
- Does Proxmox do software RAID if I set the controller back to AHCI?
Proxmox does not support soft-RAID from motherboards; set the ports to AHCI and use storage with redundancy (e.g. a ZFS mirror) instead.
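If you do switch the ports to AHCI, the redundancy can come from ZFS instead of the BIOS. A minimal sketch; the device names are made-up examples, so check yours first:

```shell
# Hypothetical device names -- verify with: lsblk -o NAME,MODEL,SIZE
# Mirror the two onboard NVMe drives (replaces the BIOS RAID 1).
# ashift=12 pins the pool to 4096-byte sectors, which suits modern NVMe:
zpool create -o ashift=12 datapool mirror /dev/nvme1n1 /dev/nvme2n1

# Stripe the two drives on the add-in card (no redundancy, like RAID 0):
zpool create -o ashift=12 vmdata /dev/nvme3n1 /dev/nvme4n1
```

The Proxmox installer can also build a ZFS mirror for the boot disks itself, so the first pool may not even need manual creation.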
#Question 2
Would Proxmox have the driver for the 10 GbE Ethernet controller?
I don't know. Maybe boot your system with an Ubuntu 24.04 LTS installer and see if it works (you don't need to actually install Ubuntu).
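From that live environment you can check whether the AQC113 is picked up without installing anything; mainline kernels ship the atlantic driver for Aquantia/Marvell 10 GbE chips. A quick check might look like:

```shell
# Which driver (if any) bound to the NIC?
lspci -nnk | grep -iA3 ethernet

# Is the link up, and did it negotiate 10 Gb/s?
ip -br link
ethtool enp1s0   # interface name is an example; use whatever "ip link" shows
```

If the lspci output shows "Kernel driver in use: atlantic", the NIC should work the same way under Proxmox, which uses a similar kernel.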
#Question 3
Would Proxmox have the drivers for the Quadro A4500?
- I would like to utilize vGPU / GPU partitioning. Does Proxmox have the ability to do this? I assume it supports simple passthrough to the VMs.
I don't know. NVIDIA drivers are not open source, so you'll need to find NVIDIA drivers that are compatible with the Proxmox Linux kernel version. Maybe search the forum?
PCI(e) passthrough is always a bit of trial and error, searching, and sometimes work-arounds. Review your IOMMU groups beforehand with the Ubuntu installer.
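A small loop, run from the live environment with AMD-Vi/IOMMU enabled in the BIOS, will show how devices are grouped; anything you pass through should not share a group with devices the host still needs:

```shell
#!/bin/sh
# Print every IOMMU group and the PCI devices it contains.
for g in /sys/kernel/iommu_groups/*; do
    n=${g##*/}                       # group number taken from the path
    for d in "$g"/devices/*; do
        printf 'group %s: ' "$n"
        lspci -nns "${d##*/}"        # e.g. "41:00.0 VGA compatible controller ..."
    done
done | sort -V
```

If /sys/kernel/iommu_groups is empty, the IOMMU is not enabled; check the BIOS setting and the kernel command line first.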
#Question 4
I think I already know the answer to this one, but I assume I would be disabling the TPM in the BIOS?
I believe Proxmox supports Secure Boot nowadays, if you want this. I don't know if your current system is locked to the current hypervisor or operating system.
#Question 5
Would I need a dedicated GPU for Proxmox itself?
No, but some kind of physical display output (or serial terminal) can be useful when installing and troubleshooting network issues. It's hardly ever needed.
As for the setup of Proxmox:

2TB RAID 1:
Partition for Proxmox itself
Partition for storage of ISOs
Proxmox only needs about 8GB for itself, so this feels wasteful unless you have lots of (huge) ISOs.
8TB RAID 0:
VMs / VM disks
A ZFS mirror might give you better IOPS for your VMs than a stripe.
RAID 5 Array:
Storage for Plex content (Currently 30TB of content)
Using the hardware RAID controller? Sounds fine; just note that ZFS raidz1 is not really equivalent to hardware RAID 5 (with BBU).
 
Thanks for the response. I’ll give all this a shot and see. The big concern is the 10 GbE NIC. As far as AHCI vs. RAID goes, no biggie! Lastly, the GPU: NVIDIA does support the Quadro line on Linux, but I was wondering if there is a vGPU module within Proxmox. Guess I’ll find out :).

Dell finally repaired it (the on-site tech broke the workstation three times), and it’s on its way back. So hopefully over the holidays I’ll have time to dive deep into Proxmox and see how it goes.

Thanks again!
 
Yeah, I never got a chance to try it, as the workstation had issues which required Dell to literally replace everything the dipshit tech broke. Bent pins, connectors snapped off the motherboard… An absolute nightmare.

I’ll give it a shot and see how it goes once I get the system back and let you know.
 
Bummer. I did move over to an ASRock TRX50 board with the Aquantia chip yesterday. It seems to be functional so far with the default drivers. I am only using it for the VM networks, but I can sustain 400+ MB/s off my NAS ZFS array from within the VMs. That's a limit of my array, though, not the NIC. I haven't found a great iperf3 method of testing full bandwidth yet.
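For a disk-independent bandwidth test, iperf3 between two hosts with a few parallel streams is a reasonable sketch; the address below is a placeholder for the server's IP:

```shell
# On one 10 GbE host, run the server:
iperf3 -s

# On the other, push 4 parallel TCP streams for 10 seconds
# (a single stream sometimes can't saturate 10 GbE on its own):
iperf3 -c 192.168.1.50 -P 4 -t 10

# Test the reverse direction without swapping roles:
iperf3 -c 192.168.1.50 -P 4 -t 10 -R
```

The summed bandwidth line should approach line rate, around 9.4 Gbit/s after TCP/IP overhead; this takes the disks out of the picture entirely.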
 
Have you enabled jumbo frames? If you want full throughput, I’ve found enabling jumbo frames to be a big help. But your switch / network also has to support jumbo frames if you’re pumping traffic over the network.

I have a Synology 1522+ with a 10 GbE NIC. I do have 500GB of NVMe caching installed, running in RAID 1, which then writes to the main Btrfs array of 5 Seagate Exos X18 18TB drives in RAID 5.

Sustained transfer rates between my server, Zeus, and my NAS, WOPR, easily hit 10 GbE speeds (with jumbo frames enabled).

Between the two devices I have an ASUS BE-98 Pro as the switch/router, and that also shows the full 10 GbE throughput.

MTU plays a huge role in Ethernet efficiency, so that may be something to consider if you’re seeing bottlenecks across the network.
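On a Proxmox host, the MTU has to be raised on both the bridge and its physical port. A sketch, with example interface names and a placeholder address:

```shell
# Change live (or add "mtu 9000" to both stanzas in /etc/network/interfaces
# to make it persistent):
ip link set enp1s0 mtu 9000
ip link set vmbr0  mtu 9000

# Verify jumbo frames survive end-to-end: payload 8972 = 9000 - 20 (IP) - 8 (ICMP),
# and -M do forbids fragmentation, so any hop with a smaller MTU fails loudly:
ping -M do -s 8972 192.168.1.50   # placeholder address
```

Every device in the path, including the switch/router, must accept MTU 9000, or the ping will report that the packet needed fragmentation.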
 
Tbh, I don't know if it would make much of a difference with my hardware. I'm very likely limited by hard disk speed. It's a ZFS striped mirror of 6TB disks (three 2-disk mirror vdevs). It's been a while since I benchmarked, but I'm getting pretty close to 3x the single-drive read speed and 2x the single-drive write speed.
 
