[SOLVED] Drive/RAID Configuration for Proxmox - Advice/Guidance

Y0nderBoi

Well-Known Member
Sep 23, 2019
Hello,

A nice Supermicro server (CSE-216A-R900LPB, X8DTU-F mobo) has come into my possession from my employer. I am attempting to get a simple homelab set up to practice virtualization, DevOps and security work. It came with twenty-four 120GB SSDs and an Adaptec RAID controller, and I could use some help configuring it. I've read through the wiki and admin guide and watched a few videos on Proxmox, but most don't seem to deal with installing and configuring on RAID.

I initially created two RAID volumes: one just a 120GB volume for the OS, and the rest of the drives in a RAID 5 config. After installing and opening up the web GUI, I created a new LVM with the rest of the space, and now I cannot seem to do anything with it, including delete it. So I figured I would just recreate the RAID based on your recommendations, reinstall Proxmox, and then configure the storage in the GUI with the info I get from here.

I don't really care about redundancy; I just want to be able to use all of the space and power this server has. Any ideas or recommendations? Full disclosure: I am not a storage/sysadmin guru, which is why I am trying to get this lab running. Most of the other homelab stuff I have done has been on a single HDD.
Any help would be great.
 
If you do not care about any redundancy and are willing to risk losing all the data if one drive fails, you would set up RAID 0.
- allows you to use all the space with maximum performance, but if one drive fails you lose everything.

If you would like not to lose data due to a drive failure, you may want to think about RAID 5 or RAID 10.

Your initial setup of two RAIDs (RAID 1 for the OS and RAID 5 for storage) would be fine.
- When you say you could not do anything with it, what do you mean?
 
Yeah, RAID 5 and 10 were what I was looking at for my next option.
As for not being able to do anything: in the GUI I could not create an LVM, could not delete an LVM, and couldn't upload any files to the content section. And the storage space was tiny.

Of course most of this could stem from my basic level of knowledge when it comes to data storage and RAID configs.
 
Or maybe would running each drive as its own RAID 0 make things simpler? Essentially I want to be able to install Proxmox and then have any leftover space dedicated to VMs and storing ISO files.
 
Ideally you would want to present all data drives to the OS as JBOD so you can use ZFS. Behind the RAID controller you're losing filesystem-level snapshots and backups. If your Adaptec can be set up for passthrough, do so.
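For instance, once the controller is in passthrough/JBOD mode, a quick sanity check from the shell (a minimal sketch; the output will obviously vary) is to confirm all 24 SSDs show up as individual block devices rather than as one big virtual disk:

Code:
# list block devices with size, model and serial; each SSD should appear on its own
lsblk -o NAME,SIZE,MODEL,SERIAL

# the stable device paths a ZFS pool should be pointed at
ls -l /dev/disk/by-id/ | grep -v part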
 
Why is ZFS ideal ??? You can perform snapshots via the GUI if you run LVM-thin, and you can back up the VMs as well. One of the nice things about ZFS within Proxmox is the ability to replicate a VM to another node (if available). However, ZFS does require higher memory overhead, and in a lot of cases the raw performance is not as great as a good hardware RAID controller running the standard Proxmox LVM setup.

In my 3-node cluster (3 Dell nodes):
The Proxmox nodes run on an H710P RAID controller with the SSDs set up in RAID 10 or RAID 5, depending on the node.
In each system there is an NVMe drive set up with ZFS; any VM on these drives is replicated between the other nodes.
Additionally, I run a FreeNAS node with ZFS 8x2 SSD (RAID 10) mirrors, which provides shared storage to the cluster via 10G multipath iSCSI.

The whole setup has been running for some time and the performance has met expectations.

With the hardware available in the original post, I would just run the Proxmox OS in RAID 1 and the rest of the SSDs (22 disks), or a subset of them, in either RAID 10 or RAID 5.

- Contrary to popular opinion, you can always create one big hardware RAID using the available disks and put ZFS on top of it. It works well and you get the ZFS snapshot and replication features.
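As a rough sketch of that last point (assuming the controller exposes the big RAID volume as a single device, here /dev/sdb; the pool and storage names are placeholders):

Code:
# single-vdev pool on top of the hardware RAID volume
zpool create -o ashift=12 tank /dev/sdb

# lightweight inline compression
zfs set compression=lz4 tank

# register it with Proxmox as VM/container storage
pvesm add zfspool tank-storage --pool tank --content images,rootdir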
 
So ZFS seems like a more ideal and simpler solution. I am not against simply removing the RAID card, since I have never really used RAID. So again it seems like I should create a RAID 0 of 2 disks for the OS and then a RAID 5 for the other twenty-two drives? Then in Proxmox, how would the RAID 5 appear? Would it be just one giant glob of storage?
 
Why is ZFS ideal ???
sigh. If this is an actual question you may want to do a forum search. If it's rhetorical, I have no reason or desire to change your mind.
So again it seems like I should create a RAID 0 of 2 disks for the OS and then a RAID 5 for the other twenty-two drives?
In general, you want to avoid single parity volumesets, but for storage housing virtual disks striped mirrors perform best. For non virtual disk storage RAID6 (RAIDz2) would work.
 
You are going to have to explain that a little simpler for me. Sorry, I'm a bit of a noob.
alexskysilk said:
In general, you want to avoid single parity volumesets,
So avoid a single volume set AKA 2 SSDs in RAID 0?
alexskysilk said:
but for storage housing virtual disks striped mirrors perform best. For non virtual disk storage RAID6 (RAIDz2) would work
Not sure what the difference between virtual disks and non-virtual disks is in this instance.
 
I'm not sure if I am being clear enough either. So essentially I have 24 120GB SSDs. I want to install Proxmox on my server. I don't really care about having redundancy but I do want the ability to take snapshots/backups of VMs. Essentially I want to be able to install Proxmox on those drives and then have all of the leftover space from the install be used as a pool of storage for VMs and LXCs.

Does that make sense?
 
You are going to have to explain that a little simpler for me. Sorry, I'm a bit of a noob.
There are two primary techniques to aggregate disks: simple mirroring (data simply duplicated) or erasure coding/parity, a method to create recoverability using existing survivor data. RAID 1/10/0+1 are examples of mirroring, while RAID 5/6/Z/Z2 are examples of parity. Without getting technical (Wikipedia would do a better job of explaining them than me), striped mirrors achieve far superior performance for multiple concurrent IOs such as virtual disks (your VM disks) than stripesets, at the cost of higher space overhead, while stripesets offer superior sequential performance and better parity space efficiency.

For your use case I envision 3 separate volumes:
2-drive RAID1/mirror (boot): usable space = 120GB
12-drive RAID10/striped mirror (for virtual disks): usable space = 720GB
10-drive RAID6/RAIDz2 (for non-VM data storage): usable space = 960GB

The actual distribution of disks between your VM and non-VM volumes is up to you based on space requirements. It's also possible to combine the boot disks with either of the other volumes, but best practice is to keep them separate.
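If you go the ZFS route, the two data volumes from that layout could be created roughly like this (a sketch only; "vmpool", "datapool" and the /dev/disk/by-id paths are placeholders for whatever your drives enumerate as):

Code:
# 12-drive striped mirror (RAID10 equivalent) for VM disks: six 2-way mirror vdevs
zpool create -o ashift=12 vmpool \
  mirror /dev/disk/by-id/ata-SSD01 /dev/disk/by-id/ata-SSD02 \
  mirror /dev/disk/by-id/ata-SSD03 /dev/disk/by-id/ata-SSD04 \
  mirror /dev/disk/by-id/ata-SSD05 /dev/disk/by-id/ata-SSD06 \
  mirror /dev/disk/by-id/ata-SSD07 /dev/disk/by-id/ata-SSD08 \
  mirror /dev/disk/by-id/ata-SSD09 /dev/disk/by-id/ata-SSD10 \
  mirror /dev/disk/by-id/ata-SSD11 /dev/disk/by-id/ata-SSD12

# 10-drive RAIDz2 for non-VM data (ISOs, backups, etc.)
zpool create -o ashift=12 datapool raidz2 \
  /dev/disk/by-id/ata-SSD13 /dev/disk/by-id/ata-SSD14 /dev/disk/by-id/ata-SSD15 \
  /dev/disk/by-id/ata-SSD16 /dev/disk/by-id/ata-SSD17 /dev/disk/by-id/ata-SSD18 \
  /dev/disk/by-id/ata-SSD19 /dev/disk/by-id/ata-SSD20 /dev/disk/by-id/ata-SSD21 \
  /dev/disk/by-id/ata-SSD22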
 
This is perfect. A few more things: what exactly would the use case of the non-VM storage be? Holding ISOs, snapshots, backups, etc., I'm assuming? And should I just ditch the hardware RAID card and use the software RAID in Proxmox?
 
This is perfect. A few more things: what exactly would the use case of the non-VM storage be? Holding ISOs, snapshots, backups, etc., I'm assuming?
Yes, just not snapshots; snapshots are kept with the VM disk.

And should I just ditch the hardware RAID card and use the software RAID in Proxmox?

In general, yes, provided you have adequate RAM. The rule of thumb is 1GB of RAM per terabyte of usable space, but realistically it's not linear: you would want 8GB for the above config, while 32GB-64GB would be fine for 100TB or more. Mind you, this number is in addition to what you need for the healthy operation of the OS plus Proxmox's various processes (~8GB), plus RAM for your VMs. It is also worth noting that RAM used for ZFS should really be ECC, or bad things could happen (very unlikely, but a non-zero chance).

If you do the RAM calculation and find that you're short, you have options:
1. Add more RAM ;)
2. If your RAM capacity is marginal and only a little short, you may be able to live with it in combination with swap. Linux is very good about managing RAM usage. You can also cap how much RAM ZFS's cache is allowed to use (see the sketch after this list).
3. If your RAM capacity is substantially less than what you need based on the above, it's possible to NOT use all disks for ZFS; set up boot and non-VM data using LVM and only VM storage using ZFS.
4. As suggested above, use your RAID controller to create the virtual disks and use LVM for everything, but considering the small-ish size of your datasets, adding RAM should be the preferred solution.
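One knob worth knowing in this context (an aside, not something mentioned above): the bulk of ZFS's extra RAM usage is its ARC cache, and on Proxmox it can be capped with a module option. For example, to limit it to roughly 8GiB (the value is in bytes; pick whatever fits your headroom):

Code:
# /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 8 GiB (8 * 1024^3 bytes)
options zfs zfs_arc_max=8589934592

After editing the file, run update-initramfs -u and reboot for the limit to take effect.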
 
I have around 48GB of RAM, so I don't think I should have any issues. Now, how would I go about configuring these volumes without the RAID card? Normally I would boot into the RAID card, create the 3 volumes, and then boot into the installer and go from there. But since there won't be a card, do I create them during the install or after the fact?
 
Once the RAID card operates in passthrough mode (or you replace it with an HBA) you will simply see all 24 drives, and you will install Proxmox on a mirror using 2 of them, which you will create through the Proxmox installer. The rest you will do after the OS is installed.
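Once the extra pools exist (for example the hypothetical vmpool and datapool sketched earlier), registering them with Proxmox from the shell could look something like this; the storage IDs are placeholders, and the same can be done in the GUI under Datacenter → Storage:

Code:
# VM/container disk storage on the striped-mirror pool
pvesm add zfspool vm-storage --pool vmpool --content images,rootdir

# a dataset on the RAIDz2 pool, exposed as directory storage for ISOs and backups
zfs create datapool/dump
pvesm add dir bulk-storage --path /datapool/dump --content iso,vztmpl,backup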
 
sigh. If this is an actual question you may want to do a forum search. If it's rhetorical, I have no reason or desire to change your mind.

Honestly, I am the one sighing now if that is your answer...
I think the real answer:
- it depends on the use case, available resources and what you are trying to get out of the setup.

Simply telling someone they need ZFS as the first and only option, and that it is the best/only thing that should be used, is misleading.
This is especially true when someone would need to invest more $$ in a real HBA (if the RAID controller cannot operate as an HBA).

- What else has not been mentioned: the additional cost of a SLOG device, or running with sync=disabled, if sync-write performance is a concern (see the example below).

Anyway, I am trying to point out that, like with anything in life, there are a lot of considerations, and the final use case/expectations/budget should determine the best setup.
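For reference on that SLOG/sync point (just a sketch; the pool and device names are placeholders): a separate log device is added to an existing pool with zpool add, and sync writes can be relaxed per pool at the risk of losing the last few seconds of writes on a power loss:

Code:
# add a fast, ideally power-loss-protected device as a separate ZFS intent log (SLOG)
zpool add vmpool log /dev/nvme0n1

# or trade safety for speed: acknowledge sync writes without waiting for stable storage
zfs set sync=disabled vmpool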
 
Build your RAID with the RAID card, format the RAID volume with BTRFS, and activate compression on that pool. The RAM cost is near zero; BTRFS has a CPU/IO cost instead. In computer systems every approach has a different type of cost: some want dedicated RAM, some create CPU/IO load...
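In practice that would look something like this (just a sketch; /dev/sdb stands in for whatever volume the RAID card exposes, and the mount point is arbitrary):

Code:
# format the hardware RAID volume with BTRFS and mount it with transparent compression
mkfs.btrfs -L data /dev/sdb
mkdir -p /mnt/data
mount -o compress=zstd /dev/sdb /mnt/data

# make the mount persistent
echo 'LABEL=data /mnt/data btrfs compress=zstd 0 0' >> /etc/fstab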
 
Anyway, I am trying to point out that, like with anything in life, there are a lot of considerations, and the final use case/expectations/budget should determine the best setup.
That I can completely agree with; I noted as much further down the discussion. For the OP's use case, the benefits of ZFS should outweigh any potential costs (which would be insignificant, generally speaking).

Build your RAID with the RAID card, format the RAID volume with BTRFS, and activate compression on that pool.
Possible, but not advisable for Proxmox. There is no integration between Proxmox and BTRFS; you'd have a better experience using LVM, especially with regard to snapshots.
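If you do end up on hardware RAID + LVM, the usual pattern (sketched with placeholder names; /dev/sdb again stands in for the RAID volume) is an LVM-thin pool, which is what enables snapshots in the GUI:

Code:
# turn the RAID volume into an LVM volume group
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb

# create a thin pool from most of the free space (leave a little for metadata growth)
lvcreate -l 95%FREE --thinpool data vmdata

# register it with Proxmox for VM and container disks
pvesm add lvmthin vm-thin --vgname vmdata --thinpool data --content images,rootdir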
 
Brother, in computer systems nothing is ever free; every system has a cost. For example, with the old EXT4 and XFS filesystems people think the RAM cost is zero, but it isn't: they use RAM for caches and buffers and can end up using more RAM than ZFS and BTRFS. Also, ZFS and BTRFS have inline (on-the-fly) compression, which means those two systems have a lower disk-capacity cost than all the others.

For BTRFS you don't need any integration on the Proxmox side, but if you want a lower-cost, high-performance system, ZFS is the best choice for now. For sustained IO, ZFS is unique.
 
So, it's been a minute and I hate to revive my old thread, but I reinstalled Proxmox with the RAID configuration mentioned above. This is what my current "Disks" situation is looking like. Now how do I get this into usable storage?
[screenshot of the node's Disks view attached]
 
