Recommend me a configuration

Bedlore

New Member
Dec 15, 2014
I'm about to migrate my existing Proxmox server; I sell VPS hosting. My current server has 16GB RAM and 2 x 250GB SSD (RAID10); the new server will also have 2 x 4TB HDD (RAID10), and I'll likely increase the RAM to 32GB at some point. The current server stats show: swap 8GB (971MB used) and root HD space 25GB (3.66GB used).
So it seems I can quite safely lower my root partition to 10GB, and I know on SSD drives you should always use ext4. So I'm thinking to install the next server with "linux ext4 maxroot=10 swapsize=20"?
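
For context, those values get passed as boot options at the Proxmox installer prompt; a rough sketch of a fuller boot line (the hdsize/minfree/maxvz figures here are only illustrative, adjust to your disks):

  linux ext4 hdsize=230 maxroot=10 swapsize=20 minfree=8 maxvz=180

Whatever isn't given to root, swap and minfree ends up in the pve-data volume.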

While the main server and all VPS will run on the SSDs (usually sold in 10GB blocks), I intend to use the new 4TB drives for:
  • Client self managed backup snapshots
  • ISO / OpenVZ Templates
  • Client increased storage options, e.g. NFS mounted partitions inside a CT (haven't worked out how to do this elegantly yet, any ideas? one rough bind-mount idea is sketched below)
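
One idea I've been considering for the storage part (untested; the CT ID and paths are just placeholders) is the standard OpenVZ per-container mount script, e.g. /etc/pve/openvz/101.mount, which bind-mounts a directory from the big array into the CT at start:

  #!/bin/bash
  . /etc/vz/vz.conf
  . ${VE_CONFFILE}
  # bind a directory on the 4TB array into the container's filesystem
  mount -n --bind /bigdisk/clients/101 ${VE_ROOT}/mnt/extra

An NFS share mounted on the host and then bind-mounted the same way should also work, but I'd want to test it properly first.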

I'd really appreciate feedback and suggestions.

Thanks
 
I've been reading further on storage models in the wiki, and it looks like ZFS is a superior way to go, presuming it's stable enough for production now. If I was to go ZFS, from what I understand I should disable hardware RAID, or at least configure it for JBOD, and preferably on a server that doesn't use UEFI. How then would I best install and configure it? E.g. would running RAID-Z over 3 x 4TB SATA disks and using the SSDs as cache + log give the most fault tolerant and most expandable system?
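
Roughly what I have in mind (pool name and device names are just placeholders; in practice I'd use /dev/disk/by-id paths):

  # 3 x 4TB SATA in a single RAID-Z vdev
  zpool create tank raidz /dev/sdc /dev/sdd /dev/sde
  # mirrored SSD partitions as the log (ZIL), remaining SSD space as cache (L2ARC)
  zpool add tank log mirror /dev/sda4 /dev/sdb4
  zpool add tank cache /dev/sda5 /dev/sdb5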

Server specifics which I thought would help:
Dell R710
2 x Intel X5675 (12 cores / 24 threads with HT, at 3.06GHz)
48GB of RAM
6 x 3.5" drive spaces
 
I would strongly recommend raidz2 when using such large disks. Resilvering a raidz with 4TB disks can easily take 24-48 hours depending on workload, and you have to remember that if you lose a second disk during the resilver your entire pool is gone. raidz2 can sustain losing 2 disks.
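
For comparison, a raidz2 pool is created the same way, just with double parity and one more disk (device names assumed):

  zpool create tank raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf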
 
Thanks mir, I hadn't considered the resilvering point and it really got me re-thinking things. I've changed tack due to this and now think I'll go smaller and full SSD, like this:
| D1      | D2      | D3      | D4      | D5      | D6    |
| SSD 480 | SSD 480 | SSD 1TB | SSD 1TB | SSD 1TB | SPARE |
[--- ZFS RAID 1 ---] [--------- RAID-Z1 ----------]

To avoid cost I won't run a hot spare, but thought I'd leave one bay empty to make disk upgrades easier to perform. I figured that with the earlier 4TB HDDs I would have had to run at least one SSD for cache/log to get reasonable performance back. Going full SSD I won't have to, and while the extra storage would have been nice, I can make do with the 2TB until SSD prices drop further.

Any other ideas from anyone?
 
Not sure I understand the levels there - are you planning on creating 2 ZFS pools?

Pool1: SSD480 | SSD480

Pool2: SSD1TB | SSD1TB |SSD1TB

Regardless - you might want to consider enabling lz4 compression, it gives impressive results - my ZFS pool has 630GB of VMs using only 373GB of space, a compression ratio of 1.67, with no noticeable drop in performance.
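
If it helps, it's a one-liner per pool (pool name assumed):

  zfs set compression=lz4 tank
  # check how well it's doing later with:
  zfs get compressratio tank

Just note it only compresses data written after you turn it on.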
 
Not sure I understand the levels there - are you planning on creating 2 ZFS pools?

Pool1: SSD480 | SSD480

Pool2: SSD1TB | SSD1TB |SSD1TB

Correct, with ZFS RAID 1 on pool1 and RAID-Z1 on pool2, then I have a spare bay. I'll definitely use the compression, thanks for the tip. Not sure I'm confident enough to use template copying etc., have you tried that?
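
So creation would look roughly like this (pool and device names are placeholders; I'd use /dev/disk/by-id in practice):

  # pool1: 2 x 480GB SSD mirrored
  zpool create pool1 mirror /dev/sda /dev/sdb
  # pool2: 3 x 1TB SSD in RAID-Z1
  zpool create pool2 raidz /dev/sdc /dev/sdd /dev/sde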
 
Not sure I'm confident enough to use template copying etc, have you tried that?

Do you mean using linked clones off a Proxmox template VM? Yeah, I've avoided that; I doubt my environment is stable enough for that, I'd always be wanting to update the parent. I do do full clones off templates though.
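
For reference, a full clone off a template is just (VM IDs and name assumed):

  # full copy of template 9000 into a new VM 101
  qm clone 9000 101 --name client101 --full

whereas dropping --full gives you a linked clone that stays tied to the template.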
 
Yeh that's what I was thinking of. Any suggestions on how best to share extra space into a CT? Say I wanted to give a CT on pool1 an extra 500GB of space on pool2.
 
Yeh that's what I was thinking of. Any suggestions on how best to share extra space into a CT? Say I wanted to give a CT on pool1 an extra 500GB of space on pool2.

I don't use containers so I'm not sure on the answer to that - but could you just allocate a drive off pool 2 and give it to the CT?
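
With ZFS underneath, something along these lines might do it (dataset name, CT ID and quota are placeholders):

  # carve a capped dataset out of pool2 for the container
  zfs create pool2/ct101-extra
  zfs set quota=500G pool2/ct101-extra

and then bind-mount /pool2/ct101-extra into the CT via its mount script, as discussed earlier in the thread.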
 
My hardware vendor got back to me and puts a damper on using ZFS. He said...
Couple issues I see... we utilise the Dell H700i RAID card in the r610/r710's as this provides a good cache and is proven to be reliable with SSDs. However the H700i doesn't support JBOD. It can be done by setting all the drives in your 1TB SSD array to RAID-0 and then running ZFS over the top. I personally wouldn't recommend this however, as it can cause issues with integrity and I'm really not comfortable with you having a data loss incident. The issue with RAID-Z1 in an all flash array is that it stresses the drives considerably, as it's similar to a RAID-5 setup whereby the parity drive has to erase blocks to 0's and then write the data. With RAID-Z1 this doubles the writes every time. Have a Google of RAID-5 SSD and you'll see others echo my statements.
Humph... nothing's ever easy. It looks like using ZFS on this hardware is out of the question. It's really hardware RAID or nothing. :(
 
My hardware vendor got back to me and puts a damper on using ZFS. He said...

Humph... nothing's ever easy. It looks like using ZFS on this hardware is out of the question. It's really hardware RAID or nothing. :(

Bummer, a shame because ZFS is very flexible and nice to play with. However he is right about the lack of JBOD being an issue - I had the same problem with my original config. I set up a logical volume for each drive (6!) on the LSI drive controller and it was a real PITA, plus I lost all the hotswap capability. In the end I pulled the card and just used the SATA ports on the motherboard, much easier.
 
