Proper Maintenance of a Node

NoahD

Jun 28, 2019
I had a scare with my cluster while doing firmware updates on one of my Dell PowerEdge R7415 servers, which took about 30 minutes to complete. I had expected the Ceph cluster to recover on its own, but some VMs stalled and two nodes became inaccessible. Once the downed node was back up, everything slowly came back and returned to normal. I have since located the how-to docs on proper Ceph node maintenance. The cluster network runs over the 10 GbE links.
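For reference, the approach from those docs (as I understand it, using the standard Ceph CLI) is to tell Ceph not to start rebalancing while a node is intentionally down:

# before starting the firmware update / planned reboot on a node
ceph osd set noout
# ... perform the maintenance, reboot, and wait for the OSDs to rejoin ...
ceph osd unset noout

With noout set, the OSDs on the rebooting node are only marked down, not out, so the cluster should not start moving data around during the maintenance window.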

The concern I have is this: if one of my nodes has a hardware failure and basically goes offline, how will Ceph handle that scenario without impacting my VMs that are part of HA? This is more of a concern when it happens in the middle of the night and I am not aware of it. Maybe some tweaks to the config below could help?
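From what I have read (so treat this as my assumption rather than something I have verified on this cluster), an unplanned failure is handled by timeouts: the OSDs on the dead node are marked down after missing heartbeats, and after mon_osd_down_out_interval (600 seconds by default, I believe) they are marked out and recovery onto the remaining hosts begins. With 3/2, the VMs should keep running on the surviving replicas in the meantime. Something like the following should show, and if needed adjust, that window:

# check how long a down OSD waits before being marked out
ceph config get mon mon_osd_down_out_interval
# example only: allow 15 minutes before rebalancing starts
ceph config set mon mon_osd_down_out_interval 900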

The cluster consists of 4 nodes with a total of 19 OSDs, a 3/2 size/min_size, and 256 PGs.

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

The host bucket below is typical of all 4 of my nodes:
# buckets
host pve4 {
id -3 # do not change unnecessarily
id -2 class hdd # do not change unnecessarily
# weight 23.287
alg straw2
hash 0 # rjenkins1
item osd.1 weight 7.277
item osd.4 weight 7.277
item osd.7 weight 0.546
item osd.10 weight 0.910
item osd.17 weight 7.277
}

# rules
rule replicated_rule {
id 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}


[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.50.10.0/24
fsid = <<REMOVED>>
mon_allow_pool_delete = false
mon_host = 10.40.10.239 10.40.10.240 10.40.10.241 10.40.10.242
osd_journal_size = 5120
osd_pool_default_min_size = 2
osd_pool_default_size = 2
public_network = 10.40.10.0/24

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.pve]
host = pve
mds_standby_for_name = pve

[mds.pve4]
host = pve4
mds_standby_for_name = pve

[mds.pve7]
host = pve7
mds_standby_for_name = pve

[mds.pve5]
host = pve5
mds_standby_for_name = pve

[mon.pve4]
host = pve4
mon_addr = 10.50.10.239:6789,10.40.10.239:6789

[mon.pve]
host = pve
mon_addr = 10.50.10.241:6789,10.40.10.241:6789

[mon.pve5]
host = pve5
mon_addr = 10.50.10.240:6789,10.40.10.240:6789
 
I just noticed that ceph.conf is missing the pve7 server as one of the 4 monitors, even though I do see it as active in the WebGUI. I don't know if adding it would have made a difference.

[mon.pve7]
host = pve7
mon_addr = 10.50.10.242:6789,10.40.10.242:6789
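
To double-check whether pve7 is really part of the monitor quorum (rather than just showing up in the GUI), I believe these standard commands show the current monmap and quorum members:

ceph mon stat
ceph quorum_status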
 
