Help with setting up storage in a RAID configuration

EV4NSY

New Member
Sep 24, 2022
Hey guys, I'm pretty new to Proxmox. I've installed it a few times to play around with, but I am a complete noob, so just a heads up. I've just done a fresh install and want to set it up to host a few VMs for game servers (Rust, Minecraft, DayZ, etc.) for myself and some friends.

ANYWAY, the system I am using is an HP ProLiant DL160 Gen9. It only has 4 drive bays, so for now I've popped in 4x 500GB Seagate HDDs, and this server has a built-in RAID controller. In case of any problems or HDD failure I don't want data loss, so I want to set these up in a RAID configuration using the server's built-in RAID controller (unless you can suggest a better method?). I have created 2 arrays, each containing 2 drives in a RAID1 configuration: of the 4 drive bays, Bay 1 and Bay 4 are one array, and Bay 2 and Bay 3 are the other, both RAID1. Now I want to add these as LVM storage so I can use them for storing .iso files and the VM disk partitions. I have set this up before, but without RAID, which is why I need help.

How would I proceed in adding these to my Proxmox installation while also maintaining RAID functionality? For the OS, I am hosting it on a 16GB USB stick so I can use all 4 bays for as much storage as possible. I will include some screenshots of how my Proxmox currently looks. These HDDs shouldn't have anything on them, and if there's anything I need to do, such as formatting them, please let me know, as I'm not sure why one is GPT and the others aren't. Also ignore my Snipping Tool "skills" lmao
 

Attachments

  • proxmox1.png (234.8 KB)
  • proxmox2.png (169.5 KB)
  • proxmox3.png (207 KB)
> How would I proceed in adding these to my Proxmox installation while also maintaining RAID functionality? For the OS, I am hosting it on a 16GB USB stick so I can use all 4 bays for as much storage as possible.
That's a bad idea. PVE will kill it in no time. PVE should be installed on an SSD or HDD that can handle the writes.

And in most cases HDDs won't be great as storage for virtual disks because of their terrible IOPS performance. I would create a RAID10 of those 4 HDDs and then install PVE with LVM-Thin on top of it. That way you at least get double the IOPS performance (which should be your biggest bottleneck) and still get redundancy.

Have a look at the chapter "Advanced LVM configuration options" here: https://pve.proxmox.com/wiki/Installation
It describes how to define how much of the storage to use for swap, the root filesystem and VM storage. It's no problem to use the same RAID array for PVE + VM storage, so a single RAID10 should be all you need.
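For reference, the installer's advanced LVM dialog exposes the fields sketched below. The values are only an illustrative sizing for a roughly 1 TB RAID10 array, not a recommendation:

```shell
# Proxmox VE installer -> Target Harddisk -> Options (advanced LVM settings).
# Illustrative values for a ~1 TB hardware RAID10 array.
hdsize=931     # how much of the array to use at all (GiB)
swapsize=8     # size of the swap logical volume
maxroot=48     # max size of the root filesystem (ISOs/backups live here by default)
minfree=16     # space left unallocated in the volume group (e.g. for snapshots)
# maxvz        # cap on the LVM-thin "data" (VM) volume; by default it gets the rest
```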
 
> That's a bad idea. PVE will kill it in no time. PVE should be installed on an SSD or HDD that can handle the writes.
>
> And in most cases HDDs won't be great as storage for virtual disks because of their terrible IOPS performance. I would create a RAID10 of those 4 HDDs and then install PVE with LVM-Thin on top of it. That way you at least get double the IOPS performance (which should be your biggest bottleneck) and still get redundancy.
>
> Have a look at the chapter "Advanced LVM configuration options" here: https://pve.proxmox.com/wiki/Installation
> It describes how to define how much of the storage to use for swap, the root filesystem and VM storage. It's no problem to use the same RAID array for PVE + VM storage, so a single RAID10 should be all you need.
Thank you, really appreciate it
 
Also keep in mind that the root filesystem is the only place where you can store files and folders, so backups, ISOs, templates and so on can only be stored on the root filesystem. By default the biggest part of the array will be used as VM storage, where only virtual disks of VMs/LXCs can be stored.
PVE itself needs about 16 GB of storage, so something like 48 GB for "maxroot" would be fine if you just need 32 GB for ISOs and other files.
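As a quick sanity check on that sizing (the 16 GB and 32 GB figures are the ones from this post):

```shell
# Rough maxroot sizing: space for PVE itself plus space for ISOs/templates.
pve_gb=16      # approximate footprint of a PVE installation
iso_gb=32      # headroom for ISOs and other files on the root filesystem
maxroot_gb=$((pve_gb + iso_gb))
echo "maxroot: ${maxroot_gb} GB"    # prints "maxroot: 48 GB"
```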
 
> Also keep in mind that the root filesystem is the only place where you can store files and folders, so backups, ISOs, templates and so on can only be stored on the root filesystem. By default the biggest part of the array will be used as VM storage, where only virtual disks of VMs/LXCs can be stored.
> PVE itself needs about 16 GB of storage, so something like 48 GB for "maxroot" would be fine if you just need 32 GB for ISOs and other files.
Ahh right, yeah. I'll only need 2 ISOs max, since I like to use Debian, and maybe a Windows ISO just to play around with. The only problem I'm having now is with the server I'm using (HP ProLiant DL160 Gen9) and its Smart Array controller. I've entered the system BIOS, accessed "Intelligent Provisioning" and then the "HP Smart Storage Administrator" to set up the 4 drives in RAID 1+0, which sets up fine. But when I run the Proxmox installation and get to the part about choosing a HDD, I see the list of all available drives, including the 4 separate drives. Not sure if this is correct or a problem (new to RAID and just learning about it along with Proxmox), but I was expecting another option indicating I was selecting the RAID controller. Does this not matter, and can I just select any drive such as /dev/sda? Sorry for being a complete noob, just wanna learn as much as possible.
EDIT
**Found the logical drive name: /dev/sdf**
 
Usually when booting the server there should be a POST message from the RAID controller where you can press a key combination to enter a menu to create and configure your RAID array. That array should then be presented to PVE as a single 1 TB disk (in the case of a RAID10 of 4x 0.5 TB disks), and you then install PVE with LVM-Thin onto that single "disk". If PVE can see the 4 individual disks, you did something wrong.
 
> Usually when booting the server there should be a POST message from the RAID controller where you can press a key combination to enter a menu to create and configure your RAID array. That array should then be presented to PVE as a single 1 TB disk (in the case of a RAID10 of 4x 0.5 TB disks), and you then install PVE with LVM-Thin onto that single "disk". If PVE can see the 4 individual disks, you did something wrong.
Yeah, I'm definitely doing something wrong lol. If I set up the 4 drives in RAID10 (1+0), that creates a logical volume (/dev/sdf), but in Proxmox I'm still seeing the 4 drive bays, and /dev/sdf is showing the internal SD card that has ESXi on it (from the business I purchased the server from). What if I scrapped the Smart Array completely and did the following: set up the 4 drives not in an array, then in the Proxmox installer, when selecting the target hard disk, choose Options > Filesystem: ZFS RAID10 and then the 4 drives?
Anyway, I'll include a screenshot of what it looks like right now, after I've set up the drives using the Smart Array in RAID 1+0.
 
Software RAID using ZFS also has its benefits: block-level compression, replication, bit-rot protection, you can move the disks to another server without a problem, and so on. But it has way more overhead (so less performance), you lose an additional 20% of your capacity (because it uses copy-on-write, and 20% of the usable capacity should always be kept free for optimal operation), and ZFS needs a lot of RAM for caching (4-6 GB of RAM should be acceptable in your case). In that case you shouldn't use any HW RAID. See here: https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers
So you would need to disable all RAID features first and use the disk controller as a dumb HBA.

A RAID10 would be called a "striped mirror" in ZFS terms and would be the RAID of choice when using HDDs as VM storage.
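Going the ZFS route (with the controller in plain HBA mode), the installer's "ZFS (RAID10)" option builds exactly this striped-mirror layout for you. Done by hand it would look roughly like the sketch below; the pool name and device paths are placeholders, and in practice /dev/disk/by-id/ paths are preferable to /dev/sdX names:

```shell
# Striped mirror ("RAID10" in ZFS terms): two mirror vdevs, striped.
# "tank" and /dev/sd[a-d] are placeholder names for this sketch.
zpool create -o ashift=12 tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd

zpool status tank    # should list two mirror vdevs under the pool
```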
 
> Software RAID using ZFS also has its benefits. But it has way more overhead, you lose an additional 20% of your capacity, and ZFS needs a lot of RAM for caching (4-6 GB of RAM should be fine in your case). In that case you shouldn't use any HW RAID. See here: https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers
> So you would need to disable all RAID features first and use the disk controller as a dumb HBA.
Ah right, my system has 160 GB of RAM anyway, so losing a couple of GB shouldn't be a problem.
 
You can store backups, ISOs and templates on any storage where you have enabled that content type; it's just configuration.
Also, I would suggest a HW RAID10 + LVM-Thin configuration rather than ZFS if possible, because of the performance loss Dunuin mentioned.
After all, you want to build a gaming server, so you need speed.
Why not install the PVE system to the internal flash drive, like VMware?
 
> You can store backups, ISOs and templates on any storage where you have enabled that content type; it's just configuration.
> Also, I would suggest a HW RAID10 + LVM-Thin configuration rather than ZFS if possible, because of the performance loss Dunuin mentioned.
> After all, you want to build a gaming server, so you need speed.
> Why not install the PVE system to the internal flash drive, like VMware?
Hi, sorry for the late response. I did at first have it installed on an internal flash drive (then an SD card), but a few people on Reddit recommended against it due to the heavy writes Proxmox does; I believe someone above also mentioned this, as it would kill the device quite rapidly. Originally I did not want to install the Proxmox OS on any of the hard drives in the bays, as I only have 4 bays and wanted them all in a RAID10 config. With the server I'm using (HP ProLiant DL160 Gen9), I was having problems setting up the storage in a HW RAID10: if I used the Smart Array and created an array with all 4 drives in RAID10, the Proxmox installation would still list all 4 drives as single drives and wouldn't show the logical drive option (e.g. /dev/sdf). The Smart Array would tell me the drive location, but that same location wouldn't be shown in the Proxmox installation, if that makes any sense. I've tried looking at many guides and tutorials but struggled to find one that would work with my server. I am a complete noob when it comes to this.

If you look at the post above this one, you'll see an image. That is what I would see after creating a Smart Array RAID10 config. If anything is unclear or I have phrased something wrong, let me know and I'll try to reword it ahaha
 
If you see that picture as the output after a RAID10 config, then something went wrong in the RAID configuration. But according to your first post you configured 2 RAID1 arrays. Nevertheless, you can create an LVM on top of a RAID10 under Disks in the server administration menu.
 
> If you see that picture as the output after a RAID10 config, then something went wrong in the RAID configuration. But according to your first post you configured 2 RAID1 arrays. Nevertheless, you can create an LVM on top of a RAID10 under Disks in the server administration menu.
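In the GUI that is Datacenter > node > Disks > LVM-Thin > Create: Thinpool. A rough CLI equivalent, assuming the controller's logical drive really does show up as /dev/sdf as in the earlier post (the names vmdata, data and vmstore are made up for this sketch), would be:

```shell
# Turn the hardware-RAID logical drive into an LVM-thin pool and
# register it with PVE as storage for VM disks. /dev/sdf is assumed.
pvcreate /dev/sdf                           # initialize the logical drive for LVM
vgcreate vmdata /dev/sdf                    # volume group on top of it
lvcreate -l 95%FREE --thinpool data vmdata  # thin pool; leave room for metadata
pvesm add lvmthin vmstore --vgname vmdata --thinpool data
```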
Ahh right, thank you!
 
