PROXMOX on R720xd - Home Server Build - Looking for suggestions

Psilospiral

Member
Jun 25, 2019
Greetings Forum:

I'm new to PROXMOX. I have been experimenting with PROXMOX on a Dell R720xd with an H710 HBA flashed to IT mode, an internal PMR79 Dual SD Card module to store the OS, numerous 3TB SAS drives, and 128GB of ECC RAM. I currently run one QNAP NAS box for file storage, as a media server (Plex via QNAP package), and to host numerous virtual machines via QNAP's Virtualization Station. I run a second QNAP NAS box to rsync critical files for backup. On the primary QNAP NAS, I continuously run three separate Debian LXDE VMs, along with a few VMs of Windows, FreeBSD, and other flavors of Linux for occasional testing, learning, sandboxing, etc. The continuously running VMs host a torrent seed box, a download box, and a Ubiquiti controller for my UniFi setup.

With any more than three VMs running, my QNAP box chokes. Hence, I am now building the R720xd server to accomplish all of the above with a little more horsepower. While I'm at it, I want to include pfSense (or OPNsense) as a firewall/router appliance to take advantage of OpenVPN and Pi-Hole plugins. I will also require CIFS and NFS network shares of a ZFS pool created on the bank of 3TB SAS drives in the R720xd. I prefer to utilize 10 of the 12 available 3.5" SAS bays for a single pool of shared ZFS storage, keep 2 hot spares on standby, and host the OS from the internal PMR79 Dual SD Card module. The zpool would then supply all of the PVE VMs with storage as well as provide network shares that all could reach...
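To make that concrete, the layout I have in mind looks roughly like this (a sketch only - the pool name and device names are placeholders, and I'd use /dev/disk/by-id paths on the real build):

  # 10 data disks in RAIDZ2 plus 2 hot spares (placeholder device names)
  zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
      spare sdk sdl
  # dataset to be exported over NFS/CIFS (the actual NFS/Samba service
  # still has to run somewhere - on the host or in a VM)
  zfs create tank/share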

I am leaning toward PROXMOX on the Dual SD Card module as the host OS, creating VMs for my Debian LXDE, Windows, and FreeBSD installations, and possibly running FreeNAS/XigmaNAS as a VM for shares. To gain familiarity with ZFS on PROXMOX, I have toyed with creating ZFS pools and deleting them via the CLI with pvesm remove, zpool destroy, etc., for kicks. I have tried FreeNAS on the R720xd and much prefer PROXMOX VE over the virtual environment within FreeNAS. It seems it's coming down to a decision of which is more important: awesome VM power (PROXMOX) or ease of network shares (FreeNAS/XigmaNAS)...
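For anyone curious, the round-trip I've been practicing looks roughly like this (storage and pool names are just examples):

  # create a throwaway pool and register it as PVE storage
  zpool create testpool mirror sdb sdc
  pvesm add zfspool test-zfs --pool testpool
  # tear it back down again
  pvesm remove test-zfs
  zpool destroy testpool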

Would anybody mind commenting on what they would suggest for topology in a single server build to accomplish all of the above? Your suggestions are greatly appreciated.

Thanks,
Psilo
 
Hey! Here's a cheatsheet (I'm using it on an R620):
  1. Forget about the SD cards. Those are mostly for ESXi; simply sell them if you need to. The Proxmox OS will kill the cards due to heavy IO.
  2. Store VMs on a Fusion-io card. Grab one around 1TB in size (it should be around $100) and you'll have quick PCIe storage. These cards are a recommended option for 12th-gen Dell servers.
  3. Throw in 4-8 SAS/SATA drives in ZFS for OS + VM backups. You can get rid of the H710 and get an H310 (passthrough is more important, and cache is not needed).
  4. Use external storage for "cold" backups.
 
Vladimir:

Thank you for the input. Another colleague suggested a Fusion-io card for my build. I just picked one up on eBay for exactly $100, but it is 1.65TB! That should be plenty for my application - and maybe even leave some cache space for the RAIDZ2 pool.

Regarding the H710: I have an H710 Mini Mono (5CT6D) flashed to LSI 9207-8i P20 IT mode (LSI SAS2308 firmware). Will this not achieve passthrough with PROXMOX?
 
@Psilospiral

Not sure - I never tried it myself; I simply sold the H710 and got an H310. It seems safer, and the lack of a battery gives me peace of mind (I assume it makes the card less likely to fail, although there is no basis for this, just my intuition).

As for the Fusion card, you've probably got an ioScale2, correct? You will need some tinkering to set it up, but you can follow the info from here so that you don't waste time on it: https://forum.proxmox.com/threads/c...iodrive-and-ioscale-cards-with-proxmox.54832/ If you have an ioScale2, you can basically copy-paste everything from that tutorial, since the drivers are the same.
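In short, the setup from that thread boils down to building the community iomemory-vsl driver against the PVE kernel. This is just the rough shape from memory - follow the tutorial for the exact branch and packages matching your kernel:

  # build prerequisites on the PVE host
  apt install build-essential dkms pve-headers-$(uname -r)
  # community-maintained driver source (pick the branch for your kernel)
  git clone https://github.com/snuf/iomemory-vsl.git
  cd iomemory-vsl
  make dkms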

Let me know if you need more help ;)
 

Vladimir:

Yes, it is a Fusion-io ioScale2 1.65TB PCIe SSD - F11-003-1T65-CS-0001. I may take you up on the help offer after receiving the card! In the meantime, I will continue to work on H710 passthrough in my limited spare time. Thanks for all your help!

By the way, I have an H310 that came with the server, but it is not flashed with IT firmware...
 
@Psilospiral

You don't really need to flash it with anything - it has pass-through out of the box. As for the queue depth, it's decent enough for backups.
Just in case, here's the guide: https://www.vladan.fr/flash-dell-perc-h310-with-it-firmware/

If I were to give advice, I'd argue that it's not worth it - the system may become less stable, and there is no real need for it. If you were going to use the ZFS pool for VMs with lots of IOPS, that would make sense; but if you'll be using the PCIe SSD for VMs, you're covered. The only bottleneck I've encountered so far is ZFS itself: when it flushes the ARC cache from RAM, it can hang the system for a short period due to the 120s timeout. Other than that - very stable and quite fast (I'm using a couple of Win10 and Ubuntu VMs).
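If you ever hit that hang, capping the ARC helps. For example (the 8GiB value here is just an example - size it to your RAM and workload):

  # /etc/modprobe.d/zfs.conf - cap the ZFS ARC at 8 GiB
  options zfs zfs_arc_max=8589934592
  # apply with: update-initramfs -u -k all, then reboot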

Depending on the use case, you may want to add a couple more PCIe SSDs in the future and scale the number of HDDs based on the space needed for backups. You can also use the HDDs for slow VMs. The only bottlenecks I foresee you hitting with an R720 (depending, of course, on how CPU-heavy the VMs are) are the amount of RAM and the network link. Other than that, with two E5-2630 v2 CPUs onboard, the only real constraints I have are the space available on the PCIe SSD for fast VMs and the RAM to feed ZFS and the VMs.
 
Greetings Vladimir:

Thank you for the H310 firmware flash link. I will attempt that later for kicks. I'd like to see whether I can notice any performance difference between the H310 and the H710 I am using now.

I certainly planned on utilizing ZFS, along with several continuously running VMs under PVE. At least one of the VMs (or containers) would sustain continuous IOPS as a seed box. I plan to run Debian LXDE for that environment.

I have been working on my PCIe passthrough issue for many days and wanted to make sure I made it past that issue before moving forward to the Fusion-io integration step. (Work has consumed all of my spare time lately...) However, I am happy to report that I have just managed to achieve PCIe passthrough of the H710 HBA flashed with SAS2308 IT firmware! I now have access to all my SAS drives via the FreeNAS webUI running in a PVE VM. I'm going to destroy my current test pools, reinstall PVE from scratch, and then document all of my installation steps. Along the way, I plan to do some trial and error with ZFS pools constructed under PVE as well as under FreeNAS, toy with shares, then move into a production-use test phase.
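For anyone searching later, the core of what finally worked was roughly the standard recipe below (Intel CPU assumed; the PCI address and VM ID are from my box and will differ on yours):

  # /etc/default/grub - enable the IOMMU, then run update-grub and reboot
  GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

  # /etc/modules - load the VFIO modules at boot
  vfio
  vfio_iommu_type1
  vfio_pci

  # find the HBA's PCI address and hand it to the FreeNAS VM
  lspci | grep -i lsi
  qm set 100 -hostpci0 01:00.0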

Although I have not had the time to follow the steps in your tutorial post on getting my Fusion-IO card up and running for VMs, it has inspired me to write a step-by-step tutorial on PCIe passthrough for those wanting to run FreeNAS under PVE. I have found lots of posts with how-to questions, but few solid answers. I will be working on my tutorial soon...
 
PLEASE DO NOT USE THE GUIDE AT THAT LINK!!!! You will BRICK your H310 if it's the Mini Mono edition that connects to the special socket on the board!!!!

I recently did this with my own R720xd, and the thing is, there is both the PCIe version (which most people use) and the Mini Mono version, which connects to a special socket on the R720xd. Several people have bricked their H310 Mini Mono by using the normal guide.

The issue is that the normal guide has you write a clean SBR onto the controller. Doing this on the Mini Mono will make the server refuse to boot, as the SBR area is where the card identifies itself to the system it's in. Empty SBR = the Dell firmware will see the card as unsupported in this slot and refuse to boot, since only Dell cards may be connected to this slot. End result = bricked H310...

The solution is to alter the SBR file. A guy on Reddit made a guide for the entire process, and I can confirm it worked, as I am now using my H310 Mini Mono with FreeNAS in the R720xd:
https://files.xbits.net:4430/zfs/h310_mini/H310MM_IT-2.pdf
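The gist of the process, grossly simplified (tool names here are from memory - follow the PDF exactly, as it supplies the correct files and byte values):

  # back up the original SBR before touching anything
  megarec -readsbr 0 original.sbr
  # hex-edit a copy so the card still identifies itself as a Dell device
  # (the PDF provides the exact bytes), then write it back:
  megarec -writesbr 0 modified.sbr
  # only then crossflash the IT firmware per the guide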
 
Greetings vRod:

Thank you for the warning. I have read that many users have run into issues when flashing IT firmware on their integrated Mini Mono H310 and H710 HBAs. I will study the issue closely and seek guidance before attempting the flash - I'm not there yet.

My H710 is also an integrated Mini Mono version. I purchased it on eBay pre-flashed... From my understanding, the H310 is PCI and the H710 is PCIe - both of my cards are integrated and do not occupy a riser slot...

There must be a way to flash a standard/stock SBR onto a bricked H310 to "recover" it and allow it to pass Dell's boot checks... Do you still have your bricked H310? FWIW, I have an untouched H310 Mini Mono, so if you want to try, I'm willing to help...
 
Hey, sure, no problem. I did not brick my own, as I followed the guide I linked to before; at this point I have a fully working H310 Mini Mono in my R720xd :) I'd be happy to help, though the guide really says it all.
 
