New installation, hardware recommendation

ProxStarter

New Member
May 29, 2025
Hey all!
New guy around here. I come from Spain and I've been working in IT for the last 15 years, mainly with Windows and VMware ESXi systems.
Now, thanks to Broadcom, ESXi is out the window, and here I am.

TL;DR: I'm looking for advice on building a storage server to act as a backup repository for my company; this is not a homelab question. I'm debating whether to go with something I know or something new. I can find HPE DL380 Gen10 or Dell R740xd servers as the foundation of the system, but I'm struggling with the storage adapters and the whole HBA/RAID mess.

Long story:
The company I work for as the IT guy needs to grow its storage capacity urgently. We work with a single file server holding 20 TB of data, and I need to back that thing up.
So I've proposed a new server to act as a repository built around Veeam and its hardened repository; this will be in the realm of 60-80 TB.

Now, Veeam runs on Windows and the hardened repository runs on Linux, and here comes issue 1: I need to virtualize the thing, with one VM for the Windows Server machine that will hold the Veeam software and another one that will be the Linux repository.
So, issue 1: use Proxmox or use Hyper-V. I'm inclined to go with Proxmox given the good reviews around, but it's new to me.
If it were Hyper-V I'd know how to proceed: RAID card, create volumes, store the VMs in the different pools (SSDs for OS, HDDs for storage), and call it a day.

Now, with Proxmox, I've read hundreds of posts about the HBA/RAID controller issue and I'm at a loss. I need to buy this thing from one place, not DIY components from eBay. And here comes issue 2.
Issue 2: If I buy the server with a RAID controller, I don't know whether I can even try the Proxmox route, but if I buy the server with an HBA, I know for sure that I can't make it work with Windows Hyper-V. Yes, I know that Windows can do dynamic disks, but I'd rather sit on a kerosene barrel with a candle.

In the HPE camp I have the HPE P816i-a (4 GB) controller.
In the Dell camp I have the HBA330 and the H740p (8 GB).
I might be able to throw in an LSI SAS 9305-16i HBA, but again, I need to know how it's going to work.

Both options will be populated with SATA HDDs for capacity and at least 2 SSDs for the OS, swap, etc. This will be set up with double parity, RAID 6 or RAIDZ2.

The rest of the parts list is similar in both cases: 1 or 2 CPUs with fewer than 16 cores in total, at least 128 GB of ECC RAM, and 9 or 10 x 10 TB drives.

So right now, I'm more inclined towards the Dell H740p option based on some posts: https://forum.proxmox.com/threads/using-raid-in-hba-mode-or-remove-raid.163468/ or https://www.truenas.com/community/threads/hp-dell-hba-discussion-again.114252/post-791458
Note that I will not be running TrueNAS (although I'd like to) and that no hardware has been purchased yet.

Any pointers will be much appreciated.

Thank you all and apologies for this long introduction.
 
Are you planning to use a cluster and Ceph?

If not are you planning to use ZFS software mirror?

Those are the two things that don't work well with hardware RAID.

Note that depending on the controller, they can often be set as needed; for instance, we've reused some older hardware by setting each disk to "JBOD" in the (LSI) RAID controller.
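For what it's worth, a quick way to sanity-check from the Linux side that the controller is really presenting raw disks (the device names below are only examples):

Code:
# every physical disk should show up individually, with its real model/serial
# (not as a single "PERC"/"LOGICAL VOLUME" virtual disk)
lsblk -o NAME,MODEL,SERIAL,SIZE,TYPE,ROTA

# stable per-disk names, handy later for zpool create
ls -l /dev/disk/by-id/ | grep -v part

# SMART data should pass straight through as well
smartctl -i /dev/sda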

In a vacuum I'd say use what you and/or others there are familiar with.

You didn't specifically mention it but Windows has Storage Spaces as well, for expandable/redundant storage.
 
I'm not an expert on this, but the main issue seems to be the OS/booting part. On certain systems GRUB or systemd-boot can't find the kernel/initrd/OS when it is installed on a RAID array. To get around this, use an HPE NS204i / BOSS card (hardware RAID 1, 2x 480 GB NVMe drives) for the PVE OS install.

If you buy/use one of the newer RAID controllers (HPE Gen11+?), they have Mixed Mode (RAID & HBA at the same time); if you don't configure a RAID array, the drives should be passed through directly (in theory). Ask your hardware partner about this just to be sure.

Depending on your needs you can use ZFS (RAID controller in HBA mode) or create a RAID array on the controller and use it in PVE as LVM-thin; both options have their pros and cons.
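In case it helps, the RAID-array-plus-LVM-thin variant roughly looks like this on the shell (just a sketch; /dev/sdb and the vgdata/data names are placeholders for your RAID volume and naming, and the web UI under node -> Disks -> LVM-Thin can do much the same):

Code:
# /dev/sdb = the big virtual disk the RAID controller exports (placeholder name)
pvcreate /dev/sdb
vgcreate vgdata /dev/sdb

# carve most of the VG into a thin pool (leave some slack for metadata/snapshots)
lvcreate -l 90%FREE --thinpool data vgdata

# register it in PVE as LVM-thin storage for VM disks
pvesm add lvmthin vm-thin --vgname vgdata --thinpool data --content images,rootdir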
 
Hey thanks for the reply,
Nope, no clustering, just a single node, at least for now.

If not are you planning to use ZFS software mirror?
To be honest, I only started to hear about ZFS when I was investigating TrueNAS. I've read so many positive things about it, and I thought it was "the" filesystem for Proxmox as well; is there another way? (Did I say I know nothing about Proxmox?)

In a vacuum I'd say use what you and/or others there are familiar with.
Hmm yes, that's part of my issue, as I said. I really want to explore Proxmox/ZFS and the opportunities it brings to the table, but I can't tinker with it as much as I'd like; this should be as solid as possible, and I'm the one who has to support it.
You didn't specifically mention it but Windows has Storage Spaces as well, for expandable/redundant storage.
True, I forgot about that. I've never used Storage Spaces. I'd love to have some caching available for this setup; I'm going to be moving 10 TB worth of backups regularly and I'd like it to be fast, but full flash is a bit over budget.
 
Interesting, I hadn't thought about booting the thing. I've worked with BOSS cards and I considered them a waste of space; they were a "must" with the latest versions of ESXi, but then again, not VMware anymore. My plan was to use a couple of mirrored SSDs for the OS, but now that I think about it, who's going to do the mirror? Easy enough with a RAID card, mounting the volumes, but I've got no experience with HBAs.

I guess I could connect the 2 SSDs directly to the motherboard and mirror them there, and use that to create the boot volume?

Depending on your needs you can use ZFS (RAID controller in HBA mode) or create a RAID array on the controller and use it in PVE as LVM-thin; both options have their pros and cons.
Cool, I will have a look.

So it seems like I'm leaning towards the Dell and the H740p combination, and "hoping" that I can at least try HBA mode and set the system up with ZFS; as a backup plan I can always do it the good ol' RAID way, as that's something I have more experience with.

Thank you for your help.

If anything else comes to mind be sure to update me!
 
You don't need a BOSS card, but then you should make sure you can boot from the device you installed PVE on.

I guess I could connect the 2 SSDs directly to the motherboard and mirror them there, and use that to create the boot volume?

That's certainly an option; if you install PVE with a ZFS mirror on those two SSDs it should work. Just be aware that, despite all the good things about ZFS, it will wear out your SSDs faster than another file system - use enterprise-grade SSDs with PLP (power-loss protection).

The PVE boot tool will "sync" the boot loader onto both SSDs; if one dies you can boot from the other, you just have to enable both SSDs in the BIOS boot order.
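The tool in question is proxmox-boot-tool; a few commands worth knowing once the mirror is in place (the /dev/sdb2 below is only an example for the ESP partition of a replaced disk):

Code:
# show which ESPs are registered and what is currently synced to them
proxmox-boot-tool status

# re-sync kernels/boot loader to all registered ESPs (also runs on kernel updates)
proxmox-boot-tool refresh

# after replacing a failed boot SSD: format and register its ESP partition
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2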

more info on pve zfs

and the official documentation
 
Hey, I'm still building the parts list and trying to figure everything out. It seems like I should be able to do a ZFS mirror for the boot drive during the installation, according to this: https://youtu.be/fymXozAXoyQ?feature=shared&t=12

My current storage layout looks like this:

StorageDesign.png

This schema gives me a backup plan that is compatible with either a RAID card or an HBA, so that's a plus for me.

The boot drive may be made up of consumer drives; the rest are enterprise OEM-certified drives. The plan is to use:
- First mirror: boot volume, ISO storage, OS drives for the other VMs (Windows for Veeam, Ubuntu for the hardened repository, maybe (probably) a Pi-hole)
- SSD RAID 5: cache, metadata, Veeam catalog, VSS or scratch disk. I have some doubts about this one; I'd appreciate some suggestions here.
- HDD RAID 6: bulk storage, not much to discuss here; this will be where the backups are stored. I don't want to risk a URE with such large drives, hence double parity. I could go with RAID 10, but at that capacity that's expensive.

This is still on the drawing board and, as I explained, I know very little about Proxmox and ZFS; I've been reading about it for a couple of weeks and still find it hard to get a clear understanding of the metadata vdevs, L2ARC, etc.
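To make this more concrete for myself, here is roughly what the ZFS version of that layout would look like on the command line, as far as I understand it so far (disk names are placeholders, corrections welcome):

Code:
# bulk pool: double parity across the 9-10 HDDs (RAIDZ2, roughly RAID 6)
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2 /dev/disk/by-id/ata-HDD3 \
  /dev/disk/by-id/ata-HDD4 /dev/disk/by-id/ata-HDD5 /dev/disk/by-id/ata-HDD6 \
  /dev/disk/by-id/ata-HDD7 /dev/disk/by-id/ata-HDD8 /dev/disk/by-id/ata-HDD9

# optional metadata ("special") vdev on mirrored SSDs -- it has to be redundant,
# because losing the special vdev loses the whole pool
zpool add tank special mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2

# optional L2ARC read cache -- no redundancy needed, it is only a cache
zpool add tank cache /dev/disk/by-id/ata-SSD3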

Thank you for your help!
 

This schema gives me a backup plan that is compatible with either a RAID card or an HBA, so that's a plus for me.

The boot drive may be made up of consumer drives; the rest are enterprise OEM-certified drives. The plan is to use:
- First mirror: boot volume, ISO storage, OS drives for the other VMs (Windows for Veeam, Ubuntu for the hardened repository, maybe (probably) a Pi-hole)

I would rethink this part. Unlike older ESXi versions or pfSense/OPNsense, Proxmox VE is NOT designed to be run from a flash drive in a kind of read-only mode. The operating system writes a lot to the OS drive: logging data, the configuration database of the Proxmox cluster file system (the SQLite file contents are used to create the configuration files on a RAM disk mounted at /etc/pve, even if you don't have a cluster, see https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs) ), and the RRD files for the metric dashboards. So consumer SSDs will fail way earlier than datacenter SSDs with power-loss protection. The forum has a lot of discussions on that subject:
https://forum.proxmox.com/search/8416015/?q=wearout+consumer+ssd+disk&o=date
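If consumer SSDs do end up in the box for a while, keeping an eye on the wear indicators is at least cheap (a rough sketch; the exact SMART attribute names differ per vendor, and the PVE GUI also shows a wearout column under Disks):

Code:
# SATA SSD: look for attributes like Wear_Leveling_Count / Media_Wearout_Indicator
smartctl -A /dev/sda | grep -iE 'wear|percent'

# NVMe SSD: "Percentage Used" in the health log
smartctl -a /dev/nvme0 | grep -i 'percentage used'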

Now, for the operating system alone you don't need an SSD; even an HDD will probably be fine. But if you want to keep VMs on the same storage this won't fly, since HDDs won't be great for VM performance.

For your questions concerning Veeam I lack the necessary experience with running Veeam on PVE, so I hope somebody else can help you :)

- SSD RAID 5: cache, metadata, Veeam catalog, VSS or scratch disk. I have some doubts about this one; I'd appreciate some suggestions here.
- HDD RAID 6: bulk storage, not much to discuss here; this will be where the backups are stored. I don't want to risk a URE with such large drives, hence double parity. I could go with RAID 10, but at that capacity that's expensive.

One thing though concerning ZFS versus HW RAID: if you want to use HW RAID, you shouldn't use ZFS, since ZFS software RAID and HW RAID don't play well together. And RAIDZ (ZFS lingo for RAID 5/6 and friends) isn't good for VM performance (it might still be good enough for data, of course): https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/
Striped mirrors (ZFS lingo for RAID 10) will give you better performance at the cost of capacity and data safety (like with classical RAID 10 versus RAID 5/6).
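For reference, that striped-mirror layout would be built like this (placeholder disk names, six disks as an example):

Code:
# three mirrored pairs striped together = the ZFS equivalent of RAID 10
zpool create -o ashift=12 fastpool \
  mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  mirror /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
# usable capacity ~50% of raw; it survives one failed disk per mirror pair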

Another thing to consider: if your HW RAID controller has a battery-backed cache, RAID 5/6 will probably perform better than ZFS RAIDZ (at least according to the threads I've read on this subject here; again, I lack real-world experience). On the other hand, ZFS has several features your HW RAID controller won't have (like on-the-fly compression), see https://forum.proxmox.com/threads/f...y-a-few-disks-should-i-use-zfs-at-all.160037/ for some insights on it.

Since you are still in the planning stage you could also do a little bit of benchmarking: first set up the variant with the HW RAID controller, test and benchmark everything, and write down the results. Then wipe everything, set it up again with a ZFS-based layout, redo the tests and benchmarks, and compare the results.
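For the benchmarking itself, fio keeps the comparison reasonably fair across both variants; something along these lines, run against each candidate layout (paths and sizes are placeholders, and for a backup target the sequential test matters more than the random one):

Code:
# large sequential writes, roughly what a backup job does
fio --name=seqwrite --directory=/mnt/testpool --size=16G \
  --rw=write --bs=1M --ioengine=libaio --iodepth=16 --numjobs=1 \
  --runtime=120 --time_based --end_fsync=1 --group_reporting

# random read/write mix, closer to what the VMs themselves generate
fio --name=randrw --directory=/mnt/testpool --size=8G \
  --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --iodepth=32 \
  --numjobs=4 --runtime=120 --time_based --group_reporting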

The biggest benefits of ZFS are its flexibility and additional features, but they come with some costs which might not be worth it for you.

If you don't use ZFS you would instead use LVM-thin or LVM; see https://pve.proxmox.com/wiki/Storage for more information on the different storage possibilities (there are also links to the documentation on LVM, LVM-thin and ZFS).
 
Thanks a ton for such an amount of information. I'll go through it and post what I find.

My interest in ZFS comes from threads like the ones I've read around here, but it is true that I still have to figure out how to make it work.
 
Couple additional points to consider.

1) With HW RAID you'll probably use either LVM or XFS for the file system. Both provide great performance, but let's say in the future you want to add a second Proxmox server and set up HA and replication. This is where you'll have a problem, as Proxmox replication only supports ZFS, with no HW RAID.

2) A problem we've discovered using ZFS (without HW RAID) is that when a disk fails it is almost impossible to identify which physical disk has failed. How we solved this: after installing Proxmox, we inserted one storage disk at a time and documented its UUID. By doing one at a time we mapped port number to UUID.
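Building the pool with /dev/disk/by-id/ names instead of sdX also helps a lot with that, since zpool status then shows the model and serial of the failed member, which you can match to the label on the caddy. A rough sketch (pool and device names are examples):

Code:
# stable device names that include vendor, model and serial number
ls -l /dev/disk/by-id/ | grep -v part

# if the pool was built from these names, the failed member in "zpool status"
# can be matched to the serial printed on the drive/caddy label
zpool status tank

# cross-check a live disk's serial against the label
smartctl -i /dev/sdc | grep -i serial

# on backplanes/controllers supported by the ledmon package, blink the slot LED
ledctl locate=/dev/sdc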
 
Interesting points.
So in the case of HW RAID, which I'm leaning a lot towards, XFS is supported: https://helpcenter.nakivo.com/User-...Requirements/Supported-Platforms.htm#Physical

Replication is not in the plans for the future, but it's interesting to know that it cannot be done.

This is going to sound like virtualization 101, but in the case of Proxmox, that's where I am. When we talk about ZFS, LVM or XFS, we are talking about the underlying storage layer of the hypervisor, right (kind of like VMFS for ESXi)? So after I create a VM, I should create the volumes and format them as XFS, ext4, NTFS or whatever, correct?
 
Be aware that you may not be able to boot PVE from drives connected to the RAID controller; there are several threads in this forum about this.
If possible, try to install PVE and check whether you can boot it from drives on the RAID card.

Solutions if you can't boot from drives attached to the RAID card:
- either attach 2 SSDs directly to the motherboard's SATA controllers and install PVE with software RAID 1 or a ZFS mirror on these
- or use an HPE NS204i / Dell BOSS card (hardware RAID 1) and install PVE on that (no ZFS)

Depending on the age of your RAID card you may be able to use it like an HBA; newer ones support mixed mode (RAID array and HBA-like passthrough).
If you delete the RAID array the drives should be accessible to the OS directly, but check the docs for your RAID card to be sure.
 
This is going to sound like virtualization 101, but in the case of Proxmox, that's where I am. When we talk about ZFS, LVM or XFS, we are talking about the underlying storage layer of the hypervisor, right (kind of like VMFS for ESXi)? So after I create a VM, I should create the volumes and format them as XFS, ext4, NTFS or whatever, correct?
Hello, yes, you are correct. Like with VMware, where you have to select VMFS5 or VMFS6 when you format a datastore, with Proxmox the options for the storage layer are LVM or XFS when your underlying storage is on hardware RAID. If you want to use software RAID, that is when ZFS is utilized, which then opens up other advanced functionality like high availability and replication.
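To make the whole chain concrete: the host-side storage (LVM/ZFS) only holds the virtual disks, and the filesystem is whatever you format inside the guest. A sketch with made-up IDs and names:

Code:
# host: list the storages PVE knows about (defined in /etc/pve/storage.cfg)
pvesm status

# host: give VM 101 a 2 TB virtual disk on the storage named "tank"
qm set 101 --scsi1 tank:2048

# inside the Linux guest: format and mount that disk with whatever FS you like
mkfs.xfs /dev/sdb
mkdir -p /backups
mount /dev/sdb /backups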