Sanity Check - Installing Proxmox on Dell R730xd

wgshef

New Member
Jan 30, 2025
Hi all,
I'm getting ready to install Proxmox for the first time. This will be for my in-home network (also used for my work-from-home business). I've been reviewing the installation process for my server, and I just want to ensure that what I'm planning is the best way to do things. I want to keep things simple, but I also want to do it right the first time.

First, the server is a recently purchased Dell R730XD (https://www.ebay.com/itm/126039524117). I plan on using this server for the following:
1. Replace my current router (a TP-Link unit) with either pfSense or OPNsense.
2. IDS/IPS (still investigating the open-source options for this - any recommendations?)
3. DNS (Pi-hole), mainly for its ad-blocking capabilities, but also for letting me add my own entries.
4. VPN (possibly). I haven't decided between WireGuard and OpenVPN.
5. SQL Express (installed in an Ubuntu container). Databases are small.
6. Home Assistant VM.
7. NAS? I'm not sure if I need dedicated software for this, or if I can just use the storage pools in Proxmox and/or the server's H730 RAID controller. The main use is keeping work files off my laptop (not very intensive at all). The secondary use is backing up computers (3 laptops, plus whatever can be backed up from the containers/VMs above). The third use is media server storage. If I do use NAS software, it will be TrueNAS - but it seems like Proxmox will handle what I want to do.
8. Media Server - Plex or Jellyfin.

What I'm planning on:
1. Install two 1TB SSDs (Samsung 870 EVO) in the two drive bays on the back of the server, in a RAID 1 configuration. Install Proxmox on these and use them for Proxmox and the containers/VMs running on it.
2. Install six 8TB HDDs (Seagate IronWolf) into six of the front drive bays, in a RAID 5 configuration. This would be for the NAS and would give me ~40TB.
3. Install three 6TB HDDs (Seagate IronWolf) into three of the front drive bays, in a RAID 0 configuration. This would give me ~18TB and would be used for backing up the six HDDs. I'll expand this as my used space increases.
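As a sanity check on the capacity figures in the plan above, here is a quick sketch of the usable-space arithmetic (real usable space will come in a bit lower once filesystem metadata, TB-vs-TiB reporting, and any ZFS reservations are accounted for):

```python
def raid_usable_tb(n_drives: int, drive_tb: float, level: str) -> float:
    """Rough usable capacity in TB, ignoring filesystem/metadata overhead."""
    if level == "raid0":   # pure striping, no redundancy
        return n_drives * drive_tb
    if level == "raid5":   # one drive's worth of capacity goes to parity
        return (n_drives - 1) * drive_tb
    if level == "raid1":   # mirrored, half the raw capacity is usable
        return n_drives * drive_tb / 2
    raise ValueError(f"unknown RAID level: {level}")

print(raid_usable_tb(6, 8, "raid5"))  # the "~40TB" NAS array
print(raid_usable_tb(3, 6, "raid0"))  # the "~18TB" backup array
```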

Questions that I have:
1. I've read throughout these forums that I should replace the H730 RAID controller with something else. However, every YouTube video I've watched on installing Proxmox on an R730XD uses the H730 controller card. Furthermore, the H730 supports all 14 drives available in this configuration, while the H330 only supports 8, so I would need two of those.
a. So, do I need to replace the H730 with the H330 card?
2. Do the two SSDs need to be on a separate controller card for Proxmox?
3. The server has 4 RJ45 ports - 2 @ 1Gb/s and 2 @ 10Gb/s. I plan to pass three of these ports into the router - the two 1Gb/s ports will be for my WAN connections (I have a DSL ISP as a backup to my Starlink, soon to be fiber), and one of the 10Gb/s ports will go to the rest of my LAN.
a. Can I pass just three of the ports to pfSense or OPNsense, or do I have to pass all four via the entire card?
b. If I have to pass the entire card, will I need an additional NIC in the server for Proxmox itself?
4. Since I'm planning to use IDS/IPS:
a. Should the LAN connection from the router instead go to the container running this software, with its output going to the LAN via the third port above?
b. Can the virtual Ethernet connectors be connected in this manner?
c. Or will I need yet another NIC?
5. This server has two USB 3 ports. My Home Assistant (HA) VM will need 2 USB ports, but it doesn't need USB 3.
a. Would it be better to just install a USB card and pass that through to the HA VM?
b. Can individual ports be passed through to a VM or container, or does the entire card get passed through?
c. I'm considering adding a USB 3 card to enable connecting my 5-bay docking station for additional backups that can then be disconnected and locked up somewhere safe off-site.
6. Do I need to use TrueNAS for what I'm planning, or will the server's H730 and/or Proxmox let me handle things?
7. I've seen that the installation requires specifying an FQDN. Should I use the <servername>.<domainname>.local format, or just <servername>.local? I'm leaning towards <servername>.myhome.local.
8. Based on what I've outlined, are there any other suggestions for how things should be done?

Thanks!
Wayne
 
I'm in the same boat as the OP here (newcomer to Proxmox, in fact - I just registered my Forums account and this is my very first reply to a post! :)).

The questions @wgshef asked here mirror many of the questions in my own mind (as I too will be installing on a Dell PowerEdge R730XD next week!). Needless to say, when I found this post I was super excited - well, that is, right up until seeing a grand total of zero replies :(

Any chance anyone is willing to share some insight, tips and tricks, words of wisdom, etc., found while on a similar journey? My R730XD has spent its life so far as a Hyper-V host for a bunch of Windows VMs. We're in the process of getting the last VMs moved off this weekend and are hoping to move away from a Windows OS and use Proxmox instead, so our DevOps folks can play with containers (and perhaps a few legacy Windows VMs here and there) on the same box, at the same time.

The first thing I'm wondering about is the RAID controller. For its life so far as a Hyper-V host, as you might imagine, we had a few volumes for the OS and then a much larger data volume where VMs were stored. Will that work for Proxmox, or is that not supported (or not best practice)?

Thanks in advance (and sorry for dredging up an old post - but in my own defense it really does mirror my own wonderings so well, and it never got any replies - so here's to hoping some folks here can help two of us in one shot!)
 
Well, both posts are too vague. This forum works best with specific questions; the OP asked about a whole concept, containing too many questions. There are just too many open details with more than one valid answer...

The first thing I'm wondering is with the RAID controller.

To be able to use ZFS you need to have direct access to the drives. Either your HBA can be "switched" to non-RAID mode via a firmware setting, or you may be able to flash it with "IT firmware", or you need to replace it.
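If it helps, one way to check whether the controller is presenting the disks directly (rather than hiding them behind a RAID volume) is to look at what Linux itself sees. A rough sketch, to be run on the host (not runnable outside a real server):

```
# Which storage controller is present, and is it in RAID or HBA/IT mode?
lspci -nn | grep -iE 'raid|sas|sata'

# Do the individual disks show up as plain block devices (sda, sdb, ...)
# with their real model/serial numbers? If you only see one big virtual
# disk, the controller is still in RAID mode.
lsblk -o NAME,SIZE,MODEL,SERIAL
```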
 
Well, both posts are too vague. This forum works best with specific questions; the OP asked about a whole concept, containing too many questions. There are just too many open details with more than one valid answer...



To be able to use ZFS you need to have direct access to the drives. Either your HBA can be "switched" to non-RAID mode via a firmware setting, or you may be able to flash it with "IT firmware", or you need to replace it.
Thanks for the reply, much appreciated. Here is an attempt at a more specific question, then.

It's my understanding (correct me if I'm wrong) that Proxmox will install and work either with an HBA in IT mode (for direct drive access, needed by ZFS) or with a RAID-mode controller. What are the pros and cons of each, and what makes one better or worse than the other?

Are there some scenarios/use cases where you absolutely would want one way vs. the other? On the flip side, are there situations where it really won't matter which way you go, and if so, any examples of situations where it doesn't matter which you choose?

Thanks!
 
It's my understanding (correct me if I'm wrong) that Proxmox will install and work either with an HBA in IT mode (for direct drive access, needed by ZFS) or with a RAID-mode controller. What are the pros and cons of each, and what makes one better or worse than the other?

A classic hardware RAID with LVM or ext2 on top can NOT give you all the goodies ZFS implements: guaranteed integrity, technically cheap snapshots, transparent compression, replication - to name just a few.

See also https://forum.proxmox.com/threads/f...y-a-few-disks-should-i-use-zfs-at-all.160037/
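To make the "cheap snapshots" and "transparent compression" points concrete, a small command sketch (hypothetical pool and device names; to be run on a ZFS-capable host, not runnable here):

```
# Create a mirrored pool from two whole disks
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

# Transparent compression for everything written from now on
zfs set compression=lz4 tank

# Near-instant, space-cheap snapshot...
zfs snapshot tank@before-upgrade

# ...and rolling back to it undoes everything written since
zfs rollback tank@before-upgrade
```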

Are there some scenarios/use cases where you absolutely would want one way vs. the other?

Personally I WANT all the features of ZFS - as long as it is feasible. Especially the integrity aspect is important to me. I have never lost a single bit of data since this journey started. On the other hand, I have seen bitrot with data loss on spinning rust, and I have seen SSDs die from one second to the next. Hardware RAID fights these problems too, but...

Of course there are a lot of use cases which are fine without ZFS.

Examples: I've tried hard to implement Qubes OS on top of ZFS - and failed. Most of my laptops use the default ext filesystem (with LUKS). In my $dayjob I have several Dell servers with hardware RAID - w/o ZFS.

But everything I do with PVE (and most of my PBS instances) utilizes ZFS. I had several "workstations" installed with Ubuntu, because Ubuntu has offered "installation on ZFS" for a long time; meanwhile (for several years now) I use the PVE installation media instead, as it does this too. It is easier for me to disable the PVE stuff than to inject ZFS after installing plain Debian. :-) And thanks to the "no-subscription" repo I can do this for free...

Disclaimer, whatever I say: your mileage may vary!
 
I agree with Udo on "use ZFS whenever possible", but it should be noted that HW RAID will probably be faster, while lacking the advanced features of ZFS. For the OS install this doesn't really matter; for your VM workload it might be worth doing some benchmarking.
Hi all,
I'm getting ready to install Proxmox for the first time. This will be for my in-home network (also used for my work-from-home business). I've been reviewing the installation process for my server, and I just want to ensure that what I'm planning is the best way to do things. I want to keep things simple, but I also want to do it right the first time.

First, the server is a recently purchased Dell R730XD (https://www.ebay.com/itm/126039524117). I plan on using this server for the following:
1. Replace my current router (a TP-Link unit) with either pfSense or OPNsense.
2. IDS/IPS (still investigating the open-source options for this - any recommendations?)
3. DNS (Pi-hole), mainly for its ad-blocking capabilities, but also for letting me add my own entries.
4. VPN (possibly). I haven't decided between WireGuard and OpenVPN.
5. SQL Express (installed in an Ubuntu container). Databases are small.
6. Home Assistant VM.
7. NAS? I'm not sure if I need dedicated software for this, or if I can just use the storage pools in Proxmox and/or the server's H730 RAID controller. The main use is keeping work files off my laptop (not very intensive at all). The secondary use is backing up computers (3 laptops, plus whatever can be backed up from the containers/VMs above). The third use is media server storage. If I do use NAS software, it will be TrueNAS - but it seems like Proxmox will handle what I want to do.
8. Media Server - Plex or Jellyfin.

What I'm planning on:
1. Install two 1TB SSDs (Samsung 870 EVO) in the two drive bays on the back of the server, in a RAID 1 configuration. Install Proxmox on these and use them for Proxmox and the containers/VMs running on it.

At the OP: Using the SSDs as a combined OS/VM disk should be fine - just be prepared to replace them sooner or later due to the higher write load of PVE compared to a regular desktop OS. Obviously, using RAID1 on them makes this easier. It might be worth using two different brands to make a parallel failure less probable.

2. Install six 8TB HDDs (Seagate IronWolf) into six of the front drive bays, in a RAID 5 configuration. This would be for the NAS and would give me ~40TB.

This is OK for a NAS, as long as you don't run VMs from them (VMs on parity RAID over HDDs will suffer from poor random I/O performance).
3. Install three 6TB HDDs (Seagate IronWolf) into three of the front drive bays, in a RAID 0 configuration. This would give me ~18TB and would be used for backing up the six HDDs. I'll expand this as my used space increases.

RAID0 for backup doesn't sound like a good idea to me. Do you have plans for another backup copy on other media or offsite? Having the backup in the same server as the production data means you won't recover anything if the server as a whole gets toasted/stolen etc. Please note that using HDDs for PBS won't give you great performance (especially housekeeping tasks like verify and garbage collection will take quite long), although this can be mitigated by using an SSD for metadata storage: https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_special_device

The 3-2-1 rule is simple but effective in protecting important data from all sorts of threats, be it fires, natural disasters or attacks on your infrastructure by adversaries. In short, the rule states that one should create 3 backups on at least 2 different types of storage media, of which 1 copy is kept off-site.
https://pbs.proxmox.com/docs/storage.html#the-3-2-1-rule-with-proxmox-backup-server
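For reference, the "SSD for metadata" mitigation linked above is a ZFS "special" vdev. A command sketch with hypothetical device names (to be run on the host; note the special vdev should itself be mirrored, because losing it loses the whole pool):

```
# Add a mirrored special vdev (metadata) to an existing HDD pool named "tank"
zpool add tank special mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B

# Optionally also store small blocks on the SSDs
zfs set special_small_blocks=64K tank
```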

1. I've read throughout these forums that I should replace the H730 RAID controller with something else. However, every YouTube video I've watched on installing Proxmox on an R730XD uses the H730 controller card. Furthermore, the H730 supports all 14 drives available in this configuration, while the H330 only supports 8, so I would need two of those.
a. So, do I need to replace the H730 with the H330 card?

It depends. If you want to use it as HW RAID you won't have to do anything. If, however, you plan to use ZFS (which I would recommend due to its advanced feature set), then I would go with a plain HBA adapter. See this older thread for a discussion of the H730:
2. Do the two SSDs need to be on a separate controller card for ProxMox?
Not if you want to use them for the OS. A separate controller card is important if you want to run a NAS like TrueNAS, unRAID or OpenMediaVault as a VM: then you would need to pass through the card to the VM.
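For reference, that kind of whole-controller passthrough is a one-liner on the PVE host. A sketch with a hypothetical PCI address and VM ID (find the real address with lspci first):

```
# Identify the HBA's PCI address (e.g. 0000:03:00.0)
lspci -nn | grep -i sas

# Hand the entire controller (and every disk on it) to the NAS VM, here VM 101
qm set 101 -hostpci0 0000:03:00.0
```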
3. The server has 4 RJ45 ports - 2 @ 1Gb/s and 2 @ 10Gb/s. I plan to pass three of these ports into the router - the two 1Gb/s ports will be for my WAN connections (I have a DSL ISP as a backup to my Starlink, soon to be fiber), and one of the 10Gb/s ports will go to the rest of my LAN.
a. Can I pass just three of the ports to pfSense or OPNsense, or do I have to pass all four via the entire card?

I have no experience myself, but @meyergru did a writeup in the OPNsense forum: https://forum.opnsense.org/index.php?topic=44159.0
Personally I wouldn't use a server as the router for my home network - I still want to have Internet if the server breaks for some reason.

b. If I have to pass the entire card, will I need an additional NIC card for the server for ProxMox itself?

If there is no other Ethernet port on the server: Yes.

4. With planning to use IPS / IDS,
a. Should the Lan connection from the router go instead to the container running this software, and the output of that goes to the Lan via the third port above?
b. Can the virtual ethernet connectors be connected in this manner?
c. Or will I need yet another NIC card?

I will leave this to other people, since I have no experience with such a setup (OPNsense as a VM).
5. This server has two USB 3 ports. My Home Assistant (HA) VM will need 2 USB ports, but it doesn't need USB 3.
a. Would it be better to just install a USB card and pass that through to the HA VM?
b. Can individual ports be passed through to a VM or container, or does the entire card get passed through?
c. I'm considering adding a USB 3 port to enable connecting my 5-bay docking station for additional backup that can then be disconnected and locked up somewhere safe off-site.

See answer to 4.
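On the USB part of question 5, though: Proxmox can map individual USB devices (by vendor:product ID) or individual physical ports to a VM without passing through a whole controller; only PCIe passthrough of an add-in USB card takes the entire card. A sketch with made-up IDs:

```
# List attached USB devices and their vendor:product IDs
lsusb

# Pass a specific device to VM 100 (hypothetical Zigbee stick ID)
qm set 100 -usb0 host=10c4:ea60

# Or pin a physical port (bus-port notation), so whatever is plugged
# into that port appears in the VM
qm set 100 -usb1 host=1-1.2
```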
6. Do I need to use TrueNas for what I'm planning, or will the server's H730 and/or ProxMox let me handle things?
It depends. Do you know how to set up a fileserver under Linux? Then you can just create a VM or container, create some virtual drives for it, create some NFS or Samba shares, and you are done. If, however, you would prefer some kind of UI for managing things, then you should use a NAS OS for it. You would need to pass through the storage adapter to the NAS VM: https://www.truenas.com/community/r...guide-to-not-completely-losing-your-data.212/ There are also Linux container templates for this, which don't need passthrough (so you can manage everything via the PVE UI) but are less isolated than a VM:
https://www.turnkeylinux.org/fileserver
https://github.com/bashclub/zamba-lxc-toolbox
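If you go the plain-Linux route, a minimal Samba share really is only a few lines of smb.conf (paths, share names and the user here are just examples):

```
[work]
    path = /tank/work
    valid users = wayne
    read only = no

[backups]
    path = /tank/backups
    valid users = wayne
    read only = no
```

After that it's roughly `smbpasswd -a wayne` to set a share password and a restart of the smbd service - no NAS OS required.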
8. Based on what I've outlined, are there any other suggestions for how things should be done?

If you mainly plan to use the system as a NAS with some services, it might be less hassle to go with a NAS OS with Docker and VM support like TrueNAS Scale, unRAID or OpenMediaVault. If, on the other hand, you plan to use the system for learning and will run more than just a handful of services, Proxmox VE will be of more use (at a cost in terms of invested time and other resources).
 
You can choose to use a virtualized NIC, in which case you can use any NIC selectively. Only if you use passthrough (which I do not recommend) could you end up in a situation where the whole PCIe device must be passed to the VM.

Why I do not recommend passthrough:

1. FreeBSD is known to have problems with many NICs, esp. Realtek. Thus, having Linux do the - potentially more mature - driver handling is a plus IMHO.

2. With passthrough, you sacrifice snapshotting, which would be a good reason for running OPNsense in a VM in the first place - otherwise, I would rather use separate hardware.

3. By using a host bridge, you can even make the VM believe it actually has more than one NIC - so you do not need to configure router-on-a-stick in OPNsense. Heck, OPNsense and Proxmox can even share the same single NIC, which is out of the question with passthrough.

The speed difference is negligible in most cases.
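For what it's worth, the host-bridge approach described above looks roughly like this in /etc/network/interfaces on the Proxmox host (interface names and addresses are made up - adjust to your hardware):

```
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

The OPNsense VM then gets one virtio NIC per bridge, e.g. `net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0` in its VM config - the host and the VM share the physical port, with no passthrough needed.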
 