Greetings.
We are about to get Proxmox for our new cluster solution (see image below), and I need to make sure I understand this completely before we commit to the investment. Backups will be full image backups to a backup server running alongside the Proxmox servers, and that backup server will mirror its data with GlusterFS to an offsite data center for disaster recovery.
To start, let's say two SSDs per server, with the default settings from the wiki guide on how to set up Ceph:
one OSD per disk, one monitor per server, size 3 / min_size 2, pg_num 128 (I have sketched the commands I expect to use right after the questions). So my questions are:
- Does Ceph know to spread the data across 3 servers (minimum 2), rather than just putting the replicas on multiple SSDs in the same server and calling it a day, only for that server to fail? (I am guessing this has been thought out.)
- When I need to add storage to Ceph, can I just add disks, create new OSDs on those disks, and have them become usable to the existing pool?
- What does the "Add storage" option do when creating a pool? Is it needed when adding more disks?
- When I create a pool with size 3 and min_size 2, does that mean the pool occupies 3 OSDs and only ever uses those, or is it spread out over the entire cluster?
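For reference, this is roughly what I expect to run on each node, going by the wiki guide (syntax written from memory, so treat the exact commands, options and the pool name as my assumptions rather than something verified):

pveceph createmon
pveceph createosd /dev/sdb    # one OSD per SSD, repeated for every disk
pveceph createpool vm-pool --size 3 --min_size 2 --pg_num 128 --add_storages
ceph osd tree                 # to check how the OSDs are grouped under each host

If I understand correctly, --add_storages is the CLI counterpart of the "Add storage" option in the GUI, which is part of what I am asking about above.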
The reason I ask is that in the video in the wiki regarding Ceph setup, he creates two pools (one for images, one for containers) on 3 servers with 2 disks each. If I try to create two pools in my current setup with 3 servers and 1 disk each, I get:
mon_command failed - pg_num 128 size 3 would mean 768 total pgs, which exceeds max 750 (mon_max_pg_per_osd 250 * num_in_osds 3)
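If I read the error right, one pool at pg_num 128 with size 3 already means 128 × 3 = 384 placement-group copies, so a second identical pool brings the total to 768, while the limit works out to 250 per OSD × 3 OSDs = 750, hence the failure.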
I am still trying to figure out how placement groups, size and min_size in Ceph relate to the numbers in the Ceph PG calculator below:
ceph.com/pgcalc/
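As far as I can tell, the calculator boils down to something like pg_num = (target PGs per OSD × number of OSDs × %data) / size, rounded up to a power of two; with the default target of 100 PGs per OSD, 3 OSDs, size 3 and one pool holding all the data, that gives about 100, which rounds up to 128. I may well be misreading it, which is why I am asking.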
Thanks in advance