Hi!
I installed a 3-node cluster. Each Proxmox host has two HDDs in RAID 1 for the host OS and one 1 TB HDD for Ceph.
The first node is the strongest; the other two have slower CPUs and less RAM.
Nodes 2 and 3 are only there for Ceph redundancy. I tested live migration and it worked across all 3 nodes.
But I want to use node 1 as a NAS, database server, and monitoring host.
I installed everything I need in 2 VMs, all stored on Ceph.
All Ceph and Proxmox settings are at their defaults.
What do I need to do to allow Ceph to keep working with node 1 alone? I know it's risky and not nice.
Or alternatively, node 2 could also stay on 24/7. Is that better? Would Ceph work fine then?
If I turn nodes 2 and 3 back on, everything syncs and it's OK, but while nodes 2 and 3 are off, Ceph on node 1 accepts no writes.
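For reference, this is how the state can be checked while nodes 2 and 3 are off (standard Ceph CLI; I have not pasted my output here):

# Overall state: monitor quorum, OSDs up/in, PG health
ceph -s
# Detailed health warnings (e.g. undersized or inactive PGs)
ceph health detail
# Which monitors currently form a quorum
ceph quorum_status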
Relevant config parts:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
mon_allow_pool_delete = true
...
osd_pool_default_min_size = 2
osd_pool_default_size = 3
...
...
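If I read the defaults above correctly, size = 3 with min_size = 2 means a pool stops accepting I/O once fewer than 2 replicas are reachable, which matches what I see; presumably the monitors also lose quorum with 2 of 3 nodes down, which blocks I/O on its own. A sketch of the per-pool knobs involved ("mypool" is a placeholder for the real pool name):

# Check the current replication settings of a pool
ceph osd pool get mypool size
ceph osd pool get mypool min_size
# Lowering min_size to 1 would let a single surviving replica serve I/O
# (risky: losing that one remaining disk then means losing data)
ceph osd pool set mypool min_size 1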
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54
# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
...
...
# rules
rule replicated_rule {
id 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
# end crush map
...
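As I understand the rule above, "step chooseleaf firstn 0 type host" places each replica on a different host, so a single host can never hold more than one copy. A sketch of what a single-host variant might look like, choosing leaves at OSD level instead of host level (hypothetical rule name, not from my cluster):

rule replicated_single_host {
id 1
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type osd
step emit
}

The same rule can also be created without editing the map by hand, e.g. with "ceph osd crush rule create-replicated single_host default osd hdd".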
Thank you.