(2) SSD for OS and (4) 1TB HDD in RAID10 for VMs

tripflex

Active Member
Jan 18, 2013
13
6
43
smyl.es
Hey guys, so recently I had a problem with a server crashing, and since it was just a test server I'm going to go ahead and upgrade it to a new processor, which comes with a few upgrades.

The server I'm looking at includes (2) 64GB SSDs, and I'm adding (4) 1TB HDDs on a hardware RAID card set up in a RAID1+0 array.

My thought is to install the Proxmox ISO onto the SSD, then mount the RAID array to use for the VM data (/var/lib/vz). I'm assuming I will have to first set everything up on the SSD, mount the RAID at something like /vztmp, move everything over from /var/lib/vz, then modify fstab so the array mounts as /var/lib/vz.
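Something like this is what I have in mind; just a rough sketch, and the device name (/dev/sdb1) is only a placeholder for whatever the RAID10 volume actually shows up as:

# make a filesystem on the hardware RAID10 volume (device name is a placeholder)
mkfs.ext4 /dev/sdb1

# mount it temporarily and copy the existing VM data over
mkdir /vztmp
mount /dev/sdb1 /vztmp
rsync -a /var/lib/vz/ /vztmp/

# clear the old copy, then mount the array permanently via fstab
rm -rf /var/lib/vz/*
umount /vztmp
echo "/dev/sdb1  /var/lib/vz  ext4  defaults  0  2" >> /etc/fstab
mount /var/lib/vz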

Does this sound correct and feasible? Is /var/lib/vz the correct directory to mount for all the VM data? Does anybody see any potential problems with doing this?

What about the SSD drives? Technically the server comes with two of them, so I was thinking of just using software RAID1 for the SSDs, but I'm curious what other people's opinions are. Has anybody else set up something similar to this before?

Suggestions, comments, and constructive criticism are greatly appreciated!
 
Hi,
some remarks:
1. Software RAID is not supported, but some (many?) people use it of course. With software RAID, though, you are not on the normal upgrade path.
2. SSDs for the OS are a bit, as we say in German, "Perlen vor die Säue" ("pearls before swine"), i.e. oversized.

Do you have the option to use the two SSDs on the RAID controller as RAID1 (for LVM storage)?
And if you create two volumes from the RAID10 set (one small, like 20-100GB, for the system and one big for the data) you can simply install the system without tweaks on the small volume and use the big one as LVM storage (like local_pve). After that, you can create an LV for OpenVZ and mount it via fstab (like /mnt/local_vz).
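The LV part could look roughly like this; the device name /dev/sdc and the LV size are only examples, use whatever your big RAID10 volume and free space really are:

# create a PV and the volume group on the big RAID10 volume (device name is an example)
pvcreate /dev/sdc
vgcreate local_pve /dev/sdc

# carve out an LV for the OpenVZ data and put a filesystem on it
lvcreate -L 500G -n vz local_pve
mkfs.ext4 /dev/local_pve/vz

# mount it via fstab
mkdir -p /mnt/local_vz
echo "/dev/local_pve/vz  /mnt/local_vz  ext4  defaults  0  2" >> /etc/fstab
mount /mnt/local_vz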
You can use this storage in the GUI for OpenVZ (and also for KVM, but in that case the LVM storage is faster).
If you also use the SSDs as LVM storage, your VMs can use that storage for databases and such, and the "normal" RAID10 for data...

Udo
 
Thanks for the reply. Yeah, after posting this I spoke with a couple of people in the IRC channel and decided it would be best not to use the SSDs for the OS, and to maybe look at using them for IO-intensive VMs instead.

From talking to people in the #linux channel and from Google searching, it seems as though using RAID in an SSD setup is almost counterproductive: the lifespan is based on how many writes each NAND sector can take, and using RAID increases the writes, killing the SSD even faster.

http://www.eggxpert.com/forums/thread/707662.aspx

I do, however, like your input on setting up the RAID10 with separate volumes for the system and the storage, and will look into that when setting up the server.

I do appreciate your input and thank you for your time in posting a reply.
 
I concur that Proxmox doesn't benefit from having its system files on an SSD, mostly because you'll probably run it on server-grade hardware, and you also mentioned a hardware RAID controller. A server like that will spend 80% of its boot-up time displaying BIOS POSTs and waiting for controller timeouts (the 1-5 minute period before GRUB is even called) anyway, so saving 5 seconds of actual Linux boot time will not make a difference. As suggested by udo, I would rather use the SSDs for certain IO-intensive VMs you plan on running; a (hopefully PostgreSQL) database server would be a perfect fit for that.

Besides, @udo: "Perlen vor die Säue" is merely a translation from a Latin text and as such exists in the English language (and is identical in meaning) as well: to cast pearls before swine. ;)

//EDIT: for clarification: I posted this at the exact same time the post above me was made, hence I didn't see it.
 