Dell c6320

Leo David

Apr 25, 2017
Hi,
Has anyone tried PVE on Dell C6320 servers in a testing or even production environment?
I'm thinking of using 2 x 4-node chassis to implement a Proxmox/Ceph environment.
 
Hi,

In case this helps at all:

I deployed Proxmox on some vaguely similar Supermicro units (2 nodes in one chassis) in a 'semi-production test lab' for a client, and it was all very straightforward. My experience with Proxmox installs generally: most servers (especially from the big vendors) use very 'standard' chipsets etc., so there are typically no issues. Your biggest risk of complexity on an install is usually a 'weird RAID controller on an atypical build from an unknown vendor'.
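As a rough sanity check before an install (assuming a Debian-based shell on the box; the grep patterns below are just examples), something like this shows whether the kernel already has a driver bound to the storage controller:

  lspci -nnk | grep -EiA3 'raid|sas|ahci'
  # if each controller shows a "Kernel driver in use:" line, the Proxmox installer
  # should normally see the attached disks without extra drivers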

In terms of Ceph and goodness of fit, my general understanding is that 'many spindles are needed for it to work really well' (and probably 10gig Ethernet connectivity between all node members as well). I am not sure how many drives you will have attached, but that might affect the usability / capability / performance expectations of the Ceph config. Local storage (SATA & SSD with bcache) is pretty awesome, and vanilla NFS is pretty good, nice and easy for a test lab.
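For the local bcache option, the rough shape of the setup is something like the sketch below; the device names are placeholders and the exact steps should be checked against the bcache-tools docs for the release in use:

  apt-get install bcache-tools
  make-bcache -B /dev/sdb             # big SATA disk as the backing device (placeholder name)
  make-bcache -C /dev/nvme0n1         # SSD as the cache device (placeholder name)
  bcache-super-show /dev/nvme0n1      # note the cset.uuid of the cache set
  echo "$CSET_UUID" > /sys/block/bcache0/bcache/attach   # attach the cache, then use /dev/bcache0 as a normal disk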

Tim
 
Thanks a lot, very helpful. Meanwhile I decided to use 2x Dell C6220 for pre-production. The main problem is indeed the Ceph HBA. I would stick with an LSI 2008 chipset for the Ceph nodes, since it seems to be widely used as a JBOD card, and maybe the same controller in RAID1 mode for the OS on the Proxmox nodes.
On the other hand, I'm interested in Supermicro too for production, but I just can't decide what would really fit a setup with 3 Proxmox nodes and 3-4 Ceph OSD/MON nodes.
Any particular thoughts on these, or a production-tested Supermicro model?
 
Hi, glad to hear your Dell C6220 units are in active pre-production. If the LSI controller you mention is a fairly standard part (which I am guessing it is; LSI HBAs are well supported on Debian and thus Proxmox), then I would anticipate it should work fine.
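If it helps, a rough way to confirm which firmware personality an LSI 2008 card is running (IT for plain JBOD pass-through vs IR for RAID) is something like the below; this uses the LSI/Broadcom sas2flash utility, and the exact output field names may vary by version:

  lspci -nnk | grep -iA3 lsi     # confirm the mpt2sas driver is bound to the card
  sas2flash -listall             # the firmware product line should indicate IT rather than IR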

I have various clients on Dell Proxmox gear in production, and others on Supermicro. Some others are on "small office custom hardware builds" (blackbox AMD or Intel quad-core "desktop CPU" with 32 GB RAM, SSD:SATA bcache disk config, local-only storage) - they all work great.

I certainly would not hesitate to use Supermicro interchangeably with Dell gear in core production.

In general, my experience with a quick Ceph test was that it only made sense in a scenario where I might have a cluster of ~10 or more Proxmox nodes, each with ~8+ drives, with 10gig Ethernet connectivity for the Ceph replication network. Since I haven't had a project meet those specs yet, I haven't actually built and deployed one of these in production.

ie, my stock production Proxmox deployments at client sites tend to fall into one of 3(ish) 'general config styles':
  1. Trivial use case: a single standalone Proxmox host with only local storage, typically bcache with SATA and SSD drives, done via a Debian minimal install with Proxmox added via apt-get after the fact. Or just vanilla HW RAID under the hood.
  2. Mostly trivial use case: 3 or more nodes in a cluster with only local storage, where they don't need the live-migration benefits of shared storage, or are fine with shared-nothing migrations that are slow (or a single-node point of failure if a hardware node drops). My general experience is that HW nodes with redundant PSUs and disks fail less often than human error or overly-complex-environment syndrome.
  3. Less trivial use case: 3 or more nodes with shared storage (ie, shared fibre SAN-style storage, iSCSI, or NFS) over either fibre or gig/10gig Ethernet for storage traffic; a minimal NFS example follows this list. Again, the general observation is that disk arrays of decent quality (redundant power and of course RAID disks) tend to fail less often than 'human oops' fails, so the single point of failure of having one shared storage target is 'ok' most of the time.
  4. A variation on <3>: using a 'fault tolerant' shared storage target of some kind, ie, provided by an HA-NFS filer appliance pair.
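For style <3> with plain NFS, the Proxmox side is just a storage definition; a minimal sketch (the storage name, server address and export path below are placeholders):

  # /etc/pve/storage.cfg
  nfs: shared-nfs
      server 10.0.0.50
      export /export/pve
      path /mnt/pve/shared-nfs
      content images,rootdir

Defined once, it is visible to every node in the cluster, since /etc/pve is shared.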

ie, I would only use Ceph in a scenario where it is truly required: real disk scalability requirements, a certain size of cluster, and plenty of bandwidth (10gig) for Ceph replication. But that is just me :)
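If/when a Ceph build does happen, the replication traffic can be kept on its own 10gig segment with the usual public/cluster network split in ceph.conf; the subnets below are placeholders:

  # /etc/ceph/ceph.conf (or /etc/pve/ceph.conf on a pveceph-managed cluster)
  [global]
      public_network  = 10.10.10.0/24   # client / monitor traffic
      cluster_network = 10.10.20.0/24   # OSD-to-OSD replication traffic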

Tim
 
Hi,
Successfully reinstalled Ceph on 4 HP boxes with LSI 2008 (IT-mode flashed) and a 10Gb network for both the cluster and public networks. Journaling is on Intel DC3510 and Samsung SM863 (better results with the Samsung). I now get up to 7000 IOPS for small files and ~2500-4000 IOPS for big files. 8x Proxmox nodes (2x Dell C6100 chassis) with LSI 2008 (IR-mode flashed) and 2x 10Gb network, all connected to a full-10Gb Netgear switch.
The main problem is that the throughput from the PMX VMs to the Ceph cluster won't go beyond 1.5 Gb/s for some reason, though.
Are there any particular performance tunings to be done on the Proxmox nodes for 10Gb environment deployments?
The thing is that from an ESX VM I reach about 3-4 Gb/s, whilst from a PMX VM I don't get above 1-1.5 Gb/s. This PMX VM has its HDD on Ceph too, as an RBD image. I know that this doubles the traffic (traffic for the RBD image to be accessed by Proxmox, and traffic for the VM's OS accessing Ceph).
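To narrow down which layer the ~1.5 Gb/s ceiling lives in, I plan to benchmark each hop separately (the pool name and host address below are placeholders):

  iperf3 -s                                  # on one Proxmox/Ceph node
  iperf3 -c 10.10.10.11 -P 4                 # from another node: raw 10GbE throughput, 4 parallel streams
  rados bench -p rbd 60 write -b 4M -t 16    # pool-level throughput from a PVE node, bypassing the VM layer

If rados bench gets close to line rate but the in-VM numbers don't, the bottleneck is probably in the VM disk path rather than in the network or the OSDs.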
I will do more tests and tuning on the Ceph side, but if anyone is aware of particular settings that can be applied to the Proxmox nodes for this 10Gb/s scenario, please share...
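So far the PVE-side knobs I'm planning to try are roughly these (VMID 100, storage name 'ceph-rbd', the volume name, and interface eth2 are placeholders from my notes, not confirmed fixes):

  qm set 100 --scsihw virtio-scsi-single                                 # dedicated virtio-scsi controller per disk
  qm set 100 --scsi0 ceph-rbd:vm-100-disk-1,cache=writeback,iothread=1   # writeback cache + iothread on the RBD disk
  # in /etc/pve/storage.cfg, adding "krbd 1" to the rbd storage entry switches VM disks to the in-kernel rbd client
  ip link set dev eth2 mtu 9000    # jumbo frames on the storage NICs (switch and all Ceph nodes must match)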
Thank you very much!
Have a nice day, and weekend!

Leo
 
