Proxmox cluster - disk layout for Ceph

driftux

Hi, I plan to build my first Ceph cluster and have some newbie questions. In the beginning I will start with 5 nodes and plan to grow to 50 nodes.
The nodes are quite old (E3 CPU, 16GB RAM, 2x 1Gbps network), so I intend to gain performance by adding more nodes rather than upgrading RAM or CPU.
I have 4 SATA ports on the motherboard and don't plan to invest in HBA expanders. Could you please advise me on what kind of disk layout I should choose?
I plan to have 1 OSD on a 500GB enterprise SSD. On each node I will put a KVM or OpenVZ machine which will reside on that Ceph storage. The machines will be used for shared webhosting, so Apache, MySQL, DNS, Postfix, POP, IMAP and webmail will run in each machine - this means a lot of small reads and writes on the disk, so IOPS is very important. In case of a node failure, the cluster should power on the same VM on another node.
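To sanity-check the capacity side for myself, I made a rough sketch of the usable space at each stage of the expansion. The numbers are assumptions from what I have read (a replicated pool with size=3 and staying below roughly 80% raw usage so the cluster can still rebalance), so please correct me if they are wrong:

```python
# Rough usable-capacity estimate: 1 x 500GB OSD per node, 3x replication.
# REPLICA_COUNT and FULL_RATIO are assumed values, not measured ones.

OSD_SIZE_GB = 500
REPLICA_COUNT = 3      # assumed replicated pool size
FULL_RATIO = 0.8       # keep raw usage below ~80% to leave room for recovery

def usable_gb(nodes: int, osds_per_node: int = 1) -> float:
    raw = nodes * osds_per_node * OSD_SIZE_GB
    return raw * FULL_RATIO / REPLICA_COUNT

for nodes in (5, 8, 20, 50):
    print(f"{nodes:2d} nodes -> ~{usable_gb(nodes):5.0f} GB usable")
```

So even at 50 nodes this would be roughly 6-7TB of usable space for all VMs together, if I understand the replication overhead correctly.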

#1 QUESTION:
Please tell me whether I am thinking about the Proxmox cluster in the right way for this kind of service. I want HA, performance and easy management.
Most important of all, it should give me the peace of mind that if one server goes down, the VM will start automatically on another node, and I can repair the damaged server when I have more spare time.
#2 QUESTION:
The second part is the disk layout. Could you advise what to pick? I have plenty of HDDs, but every SSD I will need to buy, and I'm very tight on budget.
Here are my variants:

A.
2 x 1TB 5400RPM HDD for Proxmox on ZFS RAID1. The remaining free space on those disks I will combine into a single volume with LVM and keep for local backups; for example, the OS gets 200GB and the rest is for VM backups (see the space sketch after variant B).
1 x 500GB enterprise-grade SSD for one OSD. The virtual machine will be placed on that OSD.
In the future I could add another SSD for one more OSD if the node's resources aren't overused.

B.
1 x 250GB SSD (consumer grade) for Proxmox. Do I understand correctly that I don't need a RAID mirror for the Proxmox OS, because if that disk fails and the node stops working, the cluster will power on the resources on another node? So I shouldn't worry that the system sits on a single SSD. And by doing it this way the node will perform better, because it runs on an SSD and not on two slow 5400RPM HDDs.
1 x 500GB enterprise-grade SSD for a single OSD. The virtual machine will be placed on that OSD.
1 x 2TB 5400RPM HDD for local VM backups.
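To get a feeling for how many backups would actually fit in variant A, here is a rough sketch; the compression ratio is a pure guess on my side, and the real numbers also depend on ZFS overhead:

```python
# Variant A backup space, very rough (TB marketing size vs TiB, ZFS
# overhead and the real vzdump compression ratio are all approximated).

mirror_usable_gib = 931      # 2 x 1TB in RAID1 -> ~931 GiB usable
os_gib            = 200      # share planned for the Proxmox OS
backup_gib        = mirror_usable_gib - os_gib

vm_size_gb        = 500      # my VMs will be up to 500GB
compress_ratio    = 0.5      # assumed backup compression, a guess

backups_kept = backup_gib / (vm_size_gb * compress_ratio)
print(f"~{backup_gib} GiB for backups -> ~{backups_kept:.1f} compressed 500GB-VM backups")
```

So locally I would only keep the last two or three backups and rely on the external backup storage for anything older.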

p.s. I will have additional VM backup storage outside the server room.

#3 QUESTION:
And the last question: should I go with KVM or OpenVZ in my current situation? Which is better when placing VMs on Ceph storage?
My VMs will be up to 500GB, so the size is not big. I like the idea that KVM has its own kernel, firewall configuration and a single image file. But everywhere I read that OpenVZ performs better.

I have read a lot about Ceph. I don't want to make a mistake in the planning process, because it would greatly affect me and my customers, so first of all I want to ask you guys to point me in the right direction. Thanks in advance.
 
16GB of RAM does not sound like a lot. Keep in mind that for each OSD you should budget at least 1GB of RAM + 1 CPU thread. The same goes for each service (mon, mgr, mds). If you also consider using ZFS for the PVE installation, it will use a bit of RAM as well. How many guests do you want to run on each node, and how much RAM should they get? I assume that 16GB will not be enough.
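To make that concrete, here is a rough budget for one 16GB node using the rule-of-thumb numbers above; the per-service and ZFS figures are assumptions for illustration, not measurements:

```python
# Rough RAM budget for one 16GB node (rule-of-thumb values, assumed).

total_gb  = 16
pve_base  = 2            # Proxmox VE host itself (assumed)
osds      = 2            # planned 1-2 OSDs, ~1GB each as a bare minimum
ceph_svcs = 2            # e.g. one mon + one mgr on this node (assumed)
zfs_arc   = 2            # ZFS ARC if the OS runs on ZFS (assumed cap)

reserved = pve_base + osds * 1 + ceph_svcs * 1 + zfs_arc
print(f"Reserved for host/Ceph/ZFS: {reserved} GB, left for guests: {total_gb - reserved} GB")
```

That would leave only around 8GB for the guests on each node.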

Network: Do you plan to add more NICs to the nodes? Otherwise 2x 1Gbit is not enough. For Ceph you should use at least 10Gbit NICs. Then, especially if you want to use HA, you should have a dedicated network (better two, to different switches) for the PVE cluster communication (corosync). It is used by the HA functionality, and if it does not work reliably you will have problems such as unexpected reboots of nodes when they lose the connection to the cluster.
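As a rough illustration of why 1Gbit becomes the bottleneck: with 3x replication, the data of every client write has to cross the network several times (client to primary OSD, primary to the two replicas). A simplified ceiling for how fast a single node can accept client writes over one shared link, ignoring protocol overhead and the public/cluster network split:

```python
# Simplified per-link write-throughput ceiling with 3x replication.
# Real numbers depend on the network layout and overhead; this is only a sketch.

def max_client_write_mb_s(link_gbit: float, replicas: int = 3) -> float:
    link_mb_s = link_gbit * 1000 / 8        # Gbit/s -> MB/s (decimal)
    return link_mb_s / replicas             # each write crosses the link ~3x

for link in (1, 2, 10):
    print(f"{link:>2} Gbit -> ~{max_client_write_mb_s(link):4.0f} MB/s of client writes at best")
```

So even before any recovery or backfill traffic, a single 1Gbit link caps writes far below what one enterprise SSD can deliver, and rebalancing after a node failure competes for the same bandwidth.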
 
Thanks for the answer. I plan to use only 1 OSD, maybe up to 2 OSDs, and that is it. On every node I will run 1 guest (a single OpenVZ or KVM machine - please advise me which one is less resource-hungry on Ceph). This machine will have 10-12GB of RAM, so I can dedicate 4-6GB to Ceph and the Proxmox cluster. Is this OK? I plan to grow to 50 nodes in the future with the same structure, because my servers are quite weak.

I read almost everywhere that I need a 10Gbit network, but given my scenario, will I survive with a 1Gbit network for up to 8 nodes in the beginning? Later, when I add more nodes, I will invest in 10G cards and a switch.

Do I understand correctly that with Ceph and a Proxmox cluster I can run the Proxmox OS on a single drive, even on an HDD (not an SSD)? If the single OS disk of one of my nodes dies, the machine will automatically be powered up on another node, so I am safe even with a single disk in a cluster scenario. Am I right?
 
