Hi, I plan to build my first Ceph cluster and have some newbie questions. I will start with 5 nodes and plan to grow to 50 nodes.
The nodes are quite old (E3 CPU, 16 GB RAM, 2x1 Gbps network), so my plan is to gain performance by adding more nodes rather than upgrading RAM or CPU.
I have 4 SATA ports on each motherboard and don't plan to invest in HBA expanders. Could you please advise what kind of disk layout I should choose?
I plan to have 1 OSD on a 500 GB enterprise SSD. On each node I will run a KVM or OpenVZ machine that resides on the Ceph storage. The machines will be used for shared web hosting, so Apache, MySQL, DNS, Postfix, POP, IMAP, and webmail will run in each machine. This means a lot of small reads and writes, so IOPS are very important. In case of a node failure, the cluster should power on the same VM on another node.
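As a sanity check on the Ceph side, a common rule of thumb (not from this thread, just the standard placement-group guideline) is roughly 100 PGs per OSD divided by the replica count, rounded up to the next power of two. A small sketch, assuming one OSD per node and 3x replication:

```shell
# pg_count OSDS REPLICAS
# Rule-of-thumb placement-group count for a new pool:
# (OSDs * 100) / replicas, rounded up to the next power of two.
pg_count() {
  local raw=$(( ($1 * 100 + $2 - 1) / $2 ))  # ceiling division
  local p=1
  while [ "$p" -lt "$raw" ]; do p=$(( p * 2 )); done
  echo "$p"
}

pg_count 5 3    # starting cluster, 5 OSDs, size=3 -> 256
pg_count 50 3   # target cluster, 50 OSDs, size=3  -> 2048
```

Note that PG counts were hard to shrink on older Ceph releases, so with a planned growth from 5 to 50 nodes it is worth deciding up front whether to size the pool for the start or the target.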
#1 QUESTION:
Please tell me, am I thinking about the Proxmox cluster in the right way for this kind of service? I want HA, performance, and easy management.
Most important, it should give me peace of mind: if one server goes down, its VMs start automatically on another node, and I can repair the damaged one when I have more spare time.
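For reference, the automatic restart described above is what the Proxmox HA manager does; a minimal sketch of how a guest is enrolled (VM ID 100 is just a placeholder, and this requires a quorate cluster plus shared storage such as the Ceph pool):

```shell
# Register the guest with the HA manager so it is restarted
# on another node if its current node fails.
ha-manager add vm:100 --state started

# Inspect HA resource and node status.
ha-manager status
```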
#2 QUESTION:
The second part is the disk layout. Could you advise which variant to pick? I have plenty of HDDs, but every SSD I will need to buy, and I'm very tight on budget.
Here are my variants:
A.
2 x 1 TB 5400 RPM HDD for Proxmox on a ZFS RAID1 mirror. The remaining free space on those disks I will combine into a single LVM volume and keep for local backups. For example, the OS gets 200 GB and the rest is for VM backups.
1 x 500 GB enterprise-grade SSD for one OSD. The virtual machine will be placed on that OSD.
In the future I could add another SSD for a second OSD if the node's resources aren't already overused.
B.
1 x 250 GB consumer-grade SSD for Proxmox. Do I understand correctly that I don't need ZFS RAID for the Proxmox OS, because if the disk fails and the node stops functioning, the cluster will start its resources on another node? So I shouldn't care that the system sits on a single SSD, and the node will perform better on an SSD than on two slow 5400 RPM HDDs.
1 x 500 GB enterprise-grade SSD for a single OSD. The virtual machine will be placed on that OSD.
1 x 2 TB 5400 RPM HDD for local VM backups.
P.S. I will also have additional VM backup storage outside the server room.
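For what it's worth, the split in variant A could look roughly like this (device and partition names are hypothetical; the ZFS mirror itself would come from the Proxmox installer, e.g. by limiting it with the installer's disk-size option, after which the leftover space on each disk is pooled with LVM):

```shell
# Variant A sketch: Proxmox on a ZFS RAID1 mirror using ~200 GB
# of each 1 TB HDD, with the trailing free space on both disks
# combined into one LVM volume group for local vzdump backups.

# Create LVM physical volumes on the unused trailing partitions
# (partition names are placeholders for this example).
pvcreate /dev/sda4 /dev/sdb4

# One volume group spanning both leftovers (~1.6 TB total).
vgcreate vg_backup /dev/sda4 /dev/sdb4

# A single logical volume filling the group, formatted for backups.
lvcreate -l 100%FREE -n lv_backup vg_backup
mkfs.ext4 /dev/vg_backup/lv_backup
```

Note this LVM space is striped across the same spindles the OS mirror lives on, so backup jobs will compete with the OS for those slow 5400 RPM disks.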
#3 QUESTION:
And the last question: should I go with KVM or OpenVZ in my current situation? Which is better for placing VMs on Ceph storage?
My VMs will be up to 500 GB, so the size is not big. I like that KVM has its own kernel, firewall configuration, and a single image file, but everywhere I read that OpenVZ performs better.
I have read a lot about Ceph. I don't want to make a mistake in the planning stage, because it would affect me and my customers greatly, so first of all I want to ask you to point me in the right direction. Thanks in advance.