Hey All - Best Practices for New Install for a New User....

dmarc2024

New Member
Dec 6, 2024
Hi -

I have a Dell T620 server - 512GB RAM, 10Gig NIC and it has two HBA controllers running in IT mode:
  • The first controller has 2 SSDs on it, 1TB of space - I was thinking it would be a RAID1 for the Proxmox OS only, maybe some ISOs, but no VMs?
  • The second controller has 12x10TB spinning-rust drives connected to it - I was thinking these all need to be in a ZFS pool, but am unsure, and if so, what kind of pool?
    • This would be for VMs; one VM in particular would be running Windows Server, and I want to use it for file sharing/storage, so it would have 15-20TB of space allotted to it. Thick or thin provisioned?
I'm used to FreeNAS/TrueNAS being the only bare-metal-installed OS, with all the drives in one ZFS pool, and using Samba/iSCSI/CIFS to share files. I don't think that'll play well with hosting VMs, though, so I'm trying to figure out the best way to configure the hardware. Probably overthinking it...

So my question is, if you had the same hardware how would you configure Proxmox?

I keep reading about Ceph on here, but I'm not sure I need that, as it appears I'd need 3 Proxmox boxes with the same size drives, and that'll be pricey. I could get two more different Supermicro servers - but they'd have much less storage, since they are 1U rack servers compared to the 4U T620.

Thanks for reading
 
  • The first controller has 2 SSDs on it, 1TB of space - I was thinking it would be a RAID1 for the Proxmox OS only, maybe some ISOs, but no VMs?
A whole 1TB just for the Proxmox OS and ISOs is overkill. You can put some VMs on it since it will be a ZFS mirror; just be sure to leave free space for snapshots - and back up regularly to separate media / a NAS. You don't strictly need PBS for this, but it does help with dedup and other features.

Make sure you read up on the docs for replacing a failed ZFS boot disk, and it's worth your while to test the procedure in e.g. a VM.

https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_zfs
^ Search page for "Changing a failed bootable device" - you should be familiar with this procedure BEFORE a disaster happens
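To give you a feel for what that section covers, here's a rough sketch of the replacement procedure for a ZFS-on-root install that boots via proxmox-boot-tool. The device names (/dev/sdb as the surviving mirror member, /dev/sdc as the replacement) and the failed-device ID are placeholders - pull the real names from zpool status and lsblk, and read the admin guide section before trusting any of this:

```shell
# Sketch of "Changing a failed bootable device" for a ZFS-mirror root.
# DRYRUN=1 (the default here) only prints the commands instead of running them.
DRYRUN="${DRYRUN:-1}"
GOOD=/dev/sdb            # surviving mirror member (placeholder name)
NEW=/dev/sdc             # burn-in-tested replacement disk (placeholder name)
OLD="failed-disk-guid"   # failed device as shown in 'zpool status' (placeholder)

run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# 1. Copy the partition layout from the healthy disk, then randomize the GUIDs
run sgdisk "$GOOD" -R "$NEW"
run sgdisk -G "$NEW"
# 2. Resilver onto the new disk's ZFS partition (partition 3 on a stock PVE layout)
run zpool replace -f rpool "$OLD" "${NEW}3"
# 3. Make the new disk bootable (partition 2 is the ESP on a stock PVE layout)
run proxmox-boot-tool format "${NEW}2"
run proxmox-boot-tool init "${NEW}2"
run proxmox-boot-tool status
```

Doing this once in a throwaway VM with two virtual disks (yank one, replace it) is the cheapest insurance you'll ever buy.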

  • The second controller has 12x10TB spinning-rust drives connected to it - I was thinking these all need to be in a ZFS pool, but am unsure, and if so, what kind of pool?
    • This would be for VMs; one VM in particular would be running Windows Server, and I want to use it for file sharing/storage, so it would have 15-20TB of space allotted to it. Thick or thin provisioned?
If you want decent interactive response, then you want a pool of mirrors. If it were just bulk storage, I would go with RAIDZ2 across all 12.
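For reference, here's what those two layouts look like as zpool create commands. The pool name (tank) and the short device names d1..d12 are placeholders for your real /dev/disk/by-id names; the commands are only echoed here, since zpool create is destructive:

```shell
# Option A - pool of 6 mirrors ("RAID10-equivalent"): ~60TB usable,
# 6 vdevs' worth of IOPS, good for VM workloads; survives one failure
# per mirror pair.
mirrors="zpool create -o ashift=12 tank \
  mirror d1 d2 mirror d3 d4 mirror d5 d6 \
  mirror d7 d8 mirror d9 d10 mirror d11 d12"

# Option B - one 12-wide RAIDZ2 vdev: ~100TB usable, survives any two
# failures, but only a single vdev's worth of IOPS - fine for bulk
# storage, painful for VM disks.
raidz2="zpool create -o ashift=12 tank raidz2 \
  d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11 d12"

echo "$mirrors"
echo "$raidz2"
```

That IOPS difference is the whole argument: ZFS spreads writes across vdevs, so six mirror vdevs service roughly six times the concurrent I/O of one RAIDZ2 vdev.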

REMEMBER to keep at least a couple of spare disks on hand for same-day replacements, unless you want to wait for shipping - this goes for the boot drives as well as the 10TBs. Monitoring a failing array is... worrisome until the replacement disk arrives. And even then you need to burn-in test the new disk before putting it into use (a full dd zero-write followed by a SMART long test, to weed out shipping damage).

https://github.com/kneutron/ansitest/blob/master/SMART/scandisk-bigdrive-2tb+.sh
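The linked script automates it, but the burn-in boils down to two steps. Rough sketch below - /dev/sdX is a placeholder, and the dd DESTROYS everything on the target, so triple-check the device name before dropping the echoes:

```shell
# Burn-in for a fresh-from-shipping disk (placeholder device name).
DISK=/dev/sdX

# 1. Full-disk zero write - exercises every sector once
echo "dd if=/dev/zero of=$DISK bs=1M status=progress"
# 2. SMART extended self-test - can take many hours on a 10TB drive
echo "smartctl -t long $DISK"
# 3. Review the results once the self-test finishes; look for reallocated
#    or pending sectors before trusting the disk with data
echo "smartctl -a $DISK"
```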

Up to you, but depending on the disk-space needs of your VMs, you could split the disks into multiple mirror pools to separate things out a bit. With a 4x10TB or 6x10TB "RAID10-equivalent" pool, the odds of losing both disks in the same mirror pair are probably more in your favor.

Always go with thin provisioning unless you have a specific use case that requires thick. Preallocating disks gives you slightly better I/O, at the obvious cost of free disk space.
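On a ZFS storage, Proxmox controls this with the "Thin provision" checkbox, which underneath comes down to whether the zvol is created sparse. Hedged illustration with placeholder pool/volume names, commands echoed rather than run:

```shell
# Thin: -s creates a sparse zvol - the 20T is a limit, and space is only
# consumed from the pool as blocks are actually written.
echo "zfs create -s -V 20T tank/vm-100-disk-0"

# Thick: without -s, a refreservation claims the full 20T up front,
# so the guest can never run out of backing space - but neither can
# anything else use it.
echo "zfs create -V 20T tank/vm-100-disk-1"
```

With thin provisioning, keep an eye on actual pool usage (zfs list -o space) - an overcommitted pool that fills up is ugly for every VM on it.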

You won't have lvm-thin unless you specifically set it up on separate media, since you'll be going with a ZFS mirror for root/boot - so that's one less thing to worry about.

https://github.com/kneutron/ansitest/tree/master/proxmox

Look into the bkpcrit script, point it at separate media / a NAS, and run it nightly from cron. It could save your ass one day ;-)
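A nightly cron entry for it could look something like this - the install path and log file are assumptions, adjust to wherever you put the script:

```shell
# /etc/cron.d/bkpcrit  (hypothetical paths - adjust to your setup)
# min hour dom mon dow  user  command
0 2 * * * root /usr/local/bin/bkpcrit.sh >/var/log/bkpcrit.log 2>&1
```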
 
Awesome - thanks for the detailed reply.

Two questions if I may -

How does a pool of mirrors compare with a RAIDZ? Guess I need to watch some videos on that - initially I was just going to make one big RAIDZ2 pool.

Lastly, I'm not sure I understand your last paragraph - will thin provisioning be an option I can use, or not?

Thanks
 