Thank you Proxmox Team!
The Ceph integration is great. It works perfectly out of the box!
With the next update including the firewall, Proxmox will handle everything our infrastructure needs.
I have installed a test Proxmox Ceph cluster on top of our existing Proxmox cluster: first 3, then 4 nodes with 2 OSDs each. Performance is OK for a virtualized install on an existing Proxmox cluster with only a 1 Gbit network; I can still run a Win2008 server on it for testing.
IOPS are comparable to about 3 standard SATA disks, and CrystalDiskMark shows around 40 MB/s for reads (which seems to be the limit of the 1 Gbit network when OSDs and MONs share the same 1G link...?).
I used it to test cases like removing and adding OSDs, turning off one node, and even shutting down the whole cluster (after stopping the VMs) to see what happens to the Ceph cluster. So far I could not kill it; at the moment it looks really stable.
We will purchase our new hardware soon, so meanwhile I will go on testing.
I have some questions about production performance and configuration:
1) How many standard SATA disks do I need to get at least 300 MB/s read/write for a single VM (the network will be 10G) with a replication factor of 2? (We will start with 3 nodes; see my rough math after the questions.)
2) How can I manage different pools in the Proxmox GUI (is it possible at all)? One pool for SSDs and one for SATA disks? And how do I tell a newly created OSD which pool to join? (See my CRUSH sketch after the questions.)
3) If I set replica = 2, Ceph normally tries to store the data on 2 different OSDs on different hosts, correct? (I tried to capture this in the CRUSH sketch below as well.)
4) We will start with 10 bigger VMs (Win2008R2 terminal servers) on the Ceph cluster. Normal VMs will have read/write peaks of about 50 MB/s, and one (a file server) will have peaks of 300 MB/s+. We don't need too many IOPS for the moment (5-10 MB/s at 4K is OK).
5) How do I configure the network so that there is one network for the (existing) Proxmox cluster, one network for the monitors, and one for the OSDs? The howto on the Proxmox wiki uses just one network for OSDs and monitors... (see my ceph.conf sketch below).
6) What ratio are you using between OSDs and SSD journals? The Ceph docs talk about roughly 1:5 (one journal SSD per five OSDs).
7) What kind of SSDs perform well? (We still have no experience with SSDs in our servers...)
8) Switch config: we plan two 10G switches and two 10G NICs per node with round-robin bonding; one network for the OSDs, one for the monitors, and a third network on a standard switch for the VMs... Has anyone done this already, or something similar? (See the bonding sketch below.)
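
To make the questions clearer, here are some sketches of what I currently have in mind. Please correct me where I am wrong.

For question 1, my own back-of-envelope math, assuming 2 replicas, the filestore journal staying on the same spinner, and ~100 MB/s streaming per standard SATA disk:

    300 MB/s from the VM  x 2 replicas          =  600 MB/s cluster-wide
    x 2 again for the filestore journal write   = 1200 MB/s raw to the disks
    / ~100 MB/s per standard SATA disk          =  ~12 disks, i.e. 4 per node on 3 nodes

Reads should need fewer disks, since they hit only one replica and skip the journal. Is that roughly how you would calculate it?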
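For questions 2 and 3, this is how I imagine the CRUSH map would have to look to separate SSD and SATA pools; the root and host names are just made up by me:

    # decompiled CRUSH map: one root per disk type, one rule per root
    rule sata {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take sata                       # only pick OSDs under the 'sata' root
        step chooseleaf firstn 0 type host   # replicas land on different hosts (question 3)
        step emit
    }
    rule ssd {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
    }

and then point a pool at one of the rules and place a new OSD under the right root:

    ceph osd pool create ssd-pool 128 128
    ceph osd pool set ssd-pool crush_ruleset 2
    ceph osd crush set osd.12 1.0 root=ssd host=node1-ssd

Is that the intended way to do it, and can any of this be done from the Proxmox GUI?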
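For question 5, as far as I understand the Ceph docs, Ceph itself only distinguishes two networks: the public network (monitors plus client traffic) and the cluster network (OSD replication). So I don't think the monitors can get a separate network of their own. My plan, with made-up subnets, would look like this in ceph.conf:

    [global]
        public network  = 10.10.10.0/24   # MONs + clients (Proxmox nodes / VMs)
        cluster network = 10.10.20.0/24   # OSD-to-OSD replication and backfill

The existing Proxmox cluster (corosync) would then stay on its own, third subnet. Is that correct?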
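For question 8, this is the /etc/network/interfaces fragment I have in mind for the round-robin bond over the two 10G cards (interface names and addresses are only placeholders). I am not sure balance-rr behaves well when the two ports go to two different switches, so comments on that are very welcome:

    auto bond0
    iface bond0 inet static
        address 10.10.20.2
        netmask 255.255.255.0
        slaves eth2 eth3
        bond_miimon 100
        bond_mode balance-rr

The VMs would stay on a separate vmbr0 bridge on the 1G cards as before.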
best regards
philipp