Looking for suggestions on disk setups.

jorel83

Active Member
Dec 11, 2017
Hi,

I'm thinking of redoing my disk setup.

Supermicro SuperServer: 4 servers in a 2U chassis, 96 Xeon threads, >512 GiB RAM, and 6 disk slots per host, with 10 Gbit networking.

The current setup is PVE 5.2 with Ceph, and for some stupid reason I was running the drives in RAID 5. It works quite well, if a bit sluggish, but I obviously need to change that given the nature of Ceph. On top of that I have a faulty SAS card and motherboard that need to be replaced, so I'm thinking of moving the few machines to one host, reinstalling three servers, moving the VMs over to them, and then doing the last one.

Currently PVE is running on Samsung 860 250 GB SSDs and I will probably continue with that. For storage I have these drives available:

4× Samsung 860 Evo/Pro 1 TB SSD
2× HP 1 TB SAS 7.2k
2× Dell 1 TB SAS 7.2k
8× Hitachi 600 GB SAS 10k

I was thinking of having two servers running only SSDs and the other two running SSD + the SAS drives, or is that a stupid thing to do? Is there a better way to combine the SAS drives with the SSDs, or should I give up on the mechanical drives and lose that storage capacity?

Thanks for any suggestions

BR/Joel
 
Hi jorel83, my opinion is:
- Use the HP or Dell drives in a mirror (RAID 1) with ext4 for the system install.
- Pair the Samsung SSDs with the Dell/HP drives (whichever you don't use for the system) in a mirror, also on ext4.
- The 600 GB disks can likewise go in a mirror, or in a RAIDZ pool on ZFS.

In my experience ext4 is faster, for example when creating VMs. ZFS has superior compression but loses some speed.
I recommend ZFS only for storage, for example if you want to install a CT with the TurnKey File Server template (it includes Samba, Webmin, Apache and more, you can search on Google) and create a network share that you can add on all computers on your network (well, it depends on the network, but generally you can :D).
 
Thanks for the suggestion :)

My hardware is the same kind of server as the Nutanix white-label Supermicro SuperServers, and the entire point of it is to avoid RAID and NFS/SAN etc. I suppose Nutanix uses some proprietary Ceph-like thing to deal with that. However, my server has an LSI SAS RAID controller, and Ceph is not recommended on RAID; it works, but in my experience it's probably a bit slower than normal.

Nutanix AHV is nice to use but doesn't work on my hardware, and I really like Proxmox VE.

So I'm curious about a solution where PVE runs on SSDs, preferably with two hosts on SSDs only and two hosts running SSD + SAS for storage. Is that a good solution, or are there better ones out there? :)

Br/Joel
 
Hi,

I would do it like this:

- Use 2× 600 GB in each server (Debian install with md RAID 1, say 50 GB for the OS, then install Proxmox on top; the rest for storage, ZFS or whatever: slow storage).
- In two of the servers, use 1× HP + 1× Dell in a mirror (ZFS or whatever) for storage (medium-fast storage).
- In the other two servers, use 2× SSD in a mirror (ZFS or whatever) for storage (fast storage).

So in the end you will have slow storage on every server (good for backups), medium-speed storage on two servers (for VMs and/or CTs), and fast storage on the other two (good for ZFS, database VMs/CTs, whatever).
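If you go the ZFS route, the mirrors above could be created along these lines (a sketch only: the pool names and /dev/sdX paths are placeholders, so check `lsblk` for your actual devices first):

```shell
# Medium-fast pool: mirror one HP and one Dell 1 TB SAS drive.
# ashift=12 aligns the pool to 4K sectors.
zpool create -o ashift=12 tank-medium mirror /dev/sdb /dev/sdc

# Fast pool on the two SSDs, with cheap lz4 compression enabled.
zpool create -o ashift=12 -O compression=lz4 tank-fast mirror /dev/sdd /dev/sde

# Register the pools as ZFS storage in Proxmox.
pvesm add zfspool medium --pool tank-medium
pvesm add zfspool fast --pool tank-fast
```

lz4 is close to free on CPU, so enabling it on the SSD pool costs very little and often wins back space.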
 
In my experience ext4 is faster, for example when creating VMs. ZFS has superior compression but loses some speed.

ext4 is not that much faster than ZFS if ZFS is set up properly. But ext4 can be very slow in a mirror if the mirror discovers bad blocks and you need to resync your data, and ext4 is very slow after a power loss, or when you reach N days of usage without a filesystem check.
- ext4's speed will not help you if your hardware produces errors, so your RAID is OK but your data is unrecoverable (ZFS repairs at runtime using checksums and parity)
- ext4 will not help you make remote backups (zfs send/receive)
- ext4 will not help you restore your data in seconds (zfs rollback)
- ext4 will not give you short backup times (zfs snapshot)
- ext4 will not help you do very fast live migration (pve-zsync)
- ext4 will not have encryption support any time soon
- ext4 cannot compress your data even when it is highly compressible (for files like txt, xml, doc, xls and others you can get at least a 50% compression ratio on ZFS)
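As a concrete illustration of the snapshot, rollback and remote-backup points above (the pool, dataset and host names here are made up):

```shell
# Instant, cheap snapshot of a VM's disk dataset before a risky change.
zfs snapshot tank/vm-100-disk-0@before-upgrade

# Something went wrong? Roll the dataset back in seconds.
zfs rollback tank/vm-100-disk-0@before-upgrade

# Incremental remote backup: send only the blocks changed
# since the previous snapshot to another host over ssh.
zfs snapshot tank/vm-100-disk-0@daily
zfs send -i @before-upgrade tank/vm-100-disk-0@daily | \
    ssh backuphost zfs receive backup/vm-100-disk-0
```

The incremental `zfs send -i` is what makes frequent remote backups cheap: after the first full send, each run transfers only the delta.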

Yes, ZFS needs a lot of hardware resources and a lot of knowledge, and it is slower, but it is the best tool for your very important data, as long as you don't make foolish mistakes.

What I'm trying to say is that speed is only a small detail, and in many cases the safety of your data can be more important.
 
Thanks for the suggestions. What about using Ceph with SSDs for the journal, such as this: https://whiskeyalpharomeo.com/2016/04/09/proxmox-ceph-and-ssd-journals/
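For reference, on PVE 5.x an OSD with its journal on a separate SSD can be created roughly like this (a sketch only: the device paths are placeholders, and the exact flag names differ between pveceph versions, so check `pveceph help createosd` on your host):

```shell
# Data on a SAS spinner, journal (filestore) on a partition
# of a shared SSD. One SSD can serve journals for several OSDs.
pveceph createosd /dev/sdf --journal_dev /dev/sde
```

The usual caveat applies: if one SSD holds the journals for several OSDs, losing that SSD takes down all of those OSDs at once.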

Ceph is truly a marvel of functionality. What I didn't mention before is that I have an offsite (NFS) backup scheduled and it works perfectly; the 1 Gbit connection handles it very well. And Ceph with live migration is truly great.

So would anyone care to elaborate on why I should run ZFS (or something else) instead of Ceph? After all, it's a 4-server cluster with plenty of RAM and CPU and 10 Gbit fiber. The only bottlenecks are the backplane to the drives and being limited to 6× 2.5" bays per server (ignoring the old CPUs). (Basically the server is one of these, but modified: http://www.supermicro.com/products/system/2U/2026/SYS-2026TT-H6RF.cfm)

Br/Joel
 
