Hi
I have a small cluster with 3 servers running Proxmox 5.1. Each Proxmox node is also used as a Ceph node, and each node has 2 OSDs installed.
There are two things which I don't understand:
1) Pools: How many pools should I set up? Should I create one pool per identical set of settings (size, min size, ...), which would leave me with just one pool? Or should I create a pool per type, say one pool for VMs, one for containers, one for data, ...? Or should I create a pool for every single VM and container? What are the advantages/disadvantages of having just one pool versus multiple pools? What's best practice here?
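(For reference, the single-pool variant would be something like the following; the pool name "vm-pool" and the PG count of 128 are just illustrative values, not a recommendation:)

```shell
# Example only: one replicated pool shared by all VMs and containers
# (pool name and PG count are illustrative; pick PGs for your OSD count)
ceph osd pool create vm-pool 128 128 replicated

# Since Ceph Luminous (as shipped with Proxmox 5.1), pools should be
# tagged with the application that uses them; Proxmox consumes it as RBD
ceph osd pool application enable vm-pool rbd
```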
2) What I need is simple failure safety: the cluster should still operate when one server is offline. Therefore I would select a size of 2 and a min size of 1 when creating a new pool in Ceph. But the Ceph documentation says:
Resilience: You can set how many OSD are allowed to fail without losing data. For replicated pools, it is the desired number of copies/replicas of an object. A typical configuration stores an object and one additional copy (i.e., size = 2), but you can determine the number of copies/replicas.
They are talking about OSD failures. I have two OSDs in each server. Is it possible that both copies of the data end up on two OSDs in the same node? That would mean that when that node is down, the cluster couldn't run anymore. Or will Ceph handle that correctly by not placing the copy on the same node?
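For reference, this is what the default replicated rule in a decompiled CRUSH map looks like on a Luminous cluster (rule name and id may differ on other setups); as far as I understand it, the `type host` step is what decides whether the failure domain is the host or only the OSD:

```
# Default replicated CRUSH rule (decompiled map excerpt; id/name may differ)
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host   # failure domain = host
        step emit
}
```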
Thanks for your help in advance
Salzi