New installation, hardware recommendation

ProxStarter

New Member
May 29, 2025
Hey all!
New guy around here. I come from Spain, and I've been working in IT for the last 15 years, mainly with Windows and VMware ESXi systems.
Now, thanks to Broadcom, ESXi is out the window, and here I am.

TL;DR: I'm looking for advice on building a storage server to act as a backup repository for my company; this is not a homelab question. I'm debating whether to go with something I know or something new. I can find HPE DL380 Gen10 or Dell R740xd servers as the foundation of the system, but I'm struggling with the storage adapters and the whole HBA/RAID mess.

Long story:
The company I work for as the IT guy needs to grow its storage capacity urgently. We work with a single file server holding 20 TB of data, and I need to back that thing up.
So I've proposed a new server to act as a backup repository built around Veeam and its hardened repository; it will be in the realm of 60-80 TB.

Now, Veeam runs on Windows and the hardened repository runs on Linux, and here comes issue 1: I need to virtualize the thing, with one VM for the Windows Server machine that will hold the Veeam software and another that will be the Linux repository.
So issue 1: use Proxmox or use Hyper-V. I'm inclined to go with Proxmox given the good reviews around, but it's new to me.
If it were Hyper-V, I'd know the path to follow: RAID card, create volumes, store the VMs in the different pools (SSDs for the OS, HDDs for storage), and call it a day.

Now, with Proxmox, I've read hundreds of posts about the HBA/RAID controller issue and I'm at a loss. I need to buy this thing from one place, not DIY components from eBay. And here comes issue 2.
Issue 2: if I buy the server with a RAID controller, I don't know whether I can even try the Proxmox route; but if I buy the server with an HBA, I know for sure I can't make it work with Windows Hyper-V. Yes, I know Windows can do dynamic disks, but I'd rather sit on a kerosene barrel holding a candle.

On the HPE side I have the HP P816i-a (4 GB) controller.
On the Dell side I have the HBA330 and the H740p (8 GB).
I might be able to throw in an LSI SAS 9305-16i HBA, but again, I need to know how it's going to work.

Both options will be populated with SATA HDDs for capacity and at least 2 SSDs for the OS, swap, etc. The data disks will be set up with double parity: RAID6 or RAIDZ2.

The rest of the parts list is similar in both cases: 1 or 2 CPUs staying below 16 cores in total, at least 128 GB of ECC RAM, and 9 or 10 x 10 TB drives.
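In case I go the ZFS route, my rough idea of the pool layout would be something like the sketch below (device names and the exact width are placeholders, I'm still reading up on the details):
Code:
# RAIDZ2 (double parity) pool across the 10 TB SATA drives; adjust width / add a spare as needed
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 \
  /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8
# check pool health and layout afterwards
zpool status tank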

So right now I'm leaning more towards the Dell H740p option based on some posts: https://forum.proxmox.com/threads/using-raid-in-hba-mode-or-remove-raid.163468/ and https://www.truenas.com/community/threads/hp-dell-hba-discussion-again.114252/post-791458
Note that I will not be running TrueNAS (although I'd like to) and that no hardware has been purchased yet.

Any pointers will be much appreciated.

Thank you all and apologies for this long introduction.
 
Are you planning to use a cluster and Ceph?

If not, are you planning to use a ZFS software mirror?

Those are the two things that don't work well with hardware RAID.

Note that, depending on the controller, they can often be set up as needed; for instance, we've reused some older hardware by setting each disk to "JBOD" in the (LSI) RAID controller.
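For example, on LSI-based controllers the per-disk JBOD switch looks roughly like this with storcli (controller/enclosure IDs are placeholders, and the exact syntax depends on the firmware generation):
Code:
# allow JBOD on controller 0, then expose every drive as JBOD
storcli64 /c0 set jbod=on
storcli64 /c0/eall/sall set jbod
# verify the drives now show up as JBOD instead of UGood/Onln
storcli64 /c0/eall/sall show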

In a vacuum I'd say use what you and/or others there are familiar with.

You didn't specifically mention it but Windows has Storage Spaces as well, for expandable/redundant storage.
 
I'm not an expert on this, but the main issue seems to be the OS/booting part. On certain systems GRUB or systemd-boot can't find the kernel/initrd/OS when it's installed on a RAID array. To circumvent this, use an HPE NS204i / BOSS card (hardware RAID 1, 2x 480 GB NVMe drives) for the PVE OS install.

If you buy/use one of the newer RAID controllers (HPE Gen11 and up?), they have Mixed Mode (RAID and HBA at the same time); if you don't configure a RAID array, the drives should be passed through directly (in theory). Ask your hardware partner about this just to be sure.
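As a quick sanity check once the box arrives: if the controller really passes the disks through, the OS should see their real models and serial numbers, roughly like this (generic check, not vendor specific):
Code:
# passed-through disks show their real model / serial; RAID volumes usually don't
lsblk -o NAME,MODEL,SERIAL,SIZE,TYPE
# SMART data should also be readable directly on a true HBA / passthrough disk
smartctl -i /dev/sda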

Depending on your needs, you can use ZFS (RAID controller in HBA mode) or create a RAID array on the controller and use it in PVE as LVM-thin; both options have their pros and cons.
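For the LVM-thin route, a rough sketch, assuming the controller presents the RAID6 array to PVE as /dev/sdb (all names are placeholders):
Code:
# put LVM on the volume the controller exposes
pvcreate /dev/sdb
vgcreate backup /dev/sdb
# thin pool using most of the VG, then register it as PVE storage
lvcreate -l 95%FREE --thinpool data backup
pvesm add lvmthin backup-thin --vgname backup --thinpool data --content images,rootdir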
 
Are you planning to use a cluster and Ceph?
Hey thanks for the reply,
Nope, no clustering, just a single node, at least for now.

If not, are you planning to use a ZFS software mirror?
To be honest, I first heard about ZFS when I was investigating TrueNAS. I've read so many positive things about it, and I thought it was "the" filesystem for Proxmox as well; is there another way? (Did I mention I know nothing about Proxmox?)

In a vacuum I'd say use what you and/or others there are familiar with.
Hmm, yes, that's part of my issue, as I said. I really want to explore Proxmox/ZFS and the opportunities it brings to the table, but I can't tinker with it as much as I'd like; this has to be as solid as possible, and I'm the one who has to support it.
You didn't specifically mention it but Windows has Storage Spaces as well, for expandable/redundant storage.
True, I forgot about that. I've never used Storage Spaces. I'd love to have some caching available for this setup; I'm going to be moving 10 TB worth of backups regularly and I'd like it to be fast, but full flash is a bit over budget.
 
I'm not an expert on this, but the main issue seems to be the OS/booting part. On certain systems GRUB or systemd-boot can't find the kernel/initrd/OS when it's installed on a RAID array. To circumvent this, use an HPE NS204i / BOSS card (hardware RAID 1, 2x 480 GB NVMe drives) for the PVE OS install.
Interesting, I hadn't thought about booting the thing. I've worked with BOSS cards and considered them a waste of space; they were a "must" with the latest versions of ESXi, but then again, not VMware anymore. My plan was to use a couple of mirrored SSDs for the OS, but now that I think about it, who's going to do the mirror? Easy enough with a RAID card, mounting the volumes, but I've got no experience with HBAs.

I guess I could connect the 2 SSDs directly to the motherboard and RAID them there, and use that to create the boot volume?

Depending on your needs, you can use ZFS (RAID controller in HBA mode) or create a RAID array on the controller and use it in PVE as LVM-thin; both options have their pros and cons.
Cool, I will have a look.

So it seems like I'm leaning towards the Dell and H740p combination, and "hoping" that I can at least try HBA mode and set the system up with ZFS. As a backup plan, I can always do it the good ol' RAID way, since that's something I have more experience with.

Thank you for your help.

If anything else comes to mind, be sure to let me know!
 
You don't need a BOSS card, but then you should make sure you can boot from the device you installed PVE on.

I guess I could connect the 2 SSDs directly to the motherboard and RAID them there, and use that to create the boot volume?

That's certainly an option; if you install PVE with a ZFS mirror on those two SSDs, it should work. Just be aware that, despite all the good things about ZFS, it will wear out your SSDs faster than other filesystems - use enterprise-grade SSDs with PLP (power-loss protection).
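You can keep an eye on the wear with smartctl, roughly like this (the attribute names differ between SATA and NVMe drives):
Code:
# SATA SSD: look for wear / wearout attributes
smartctl -A /dev/sda | grep -i -E 'wear|percent'
# NVMe SSD: "Percentage Used" in the health log
smartctl -a /dev/nvme0 | grep -i 'percentage used'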

proxmox-boot-tool will "sync" the boot loader onto both SSDs; if one dies, you can boot from the other. You just have to enable both SSDs as boot devices in the BIOS.
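For reference, this is roughly what checking and re-syncing the boot partitions looks like (device names are placeholders):
Code:
# list the ESPs proxmox-boot-tool currently keeps in sync
proxmox-boot-tool status
# after replacing a failed SSD: format its ESP and add it back
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2
proxmox-boot-tool refresh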

More info on PVE ZFS

and the official documentation
 