Ceph Strangeness

jer1981

Member
Aug 31, 2019
Hi, while setting up a new CephFS on Proxmox 6 I keep seeing some strangeness.

Do I need to use both RBD and CephFS at the same time? I am trying to build a standby HA cluster where both nodes have identical local storage.

My question is: why do I need both CephFS *and* RBD? I need the data on both nodes in the event of a failure.

When trying to set up an MDS for CephFS, the MDS stays in up:creating and I see the following in the log over and over.
Do I even need the MDS, or just RBD?
What is the point of CephFS in a cluster? Can someone please explain?

2019-08-31 16:22:34.198697 mgr.hypervisor01 (mgr.14102) 590 : cluster [DBG] pgmap v605: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:36.199704 mgr.hypervisor01 (mgr.14102) 591 : cluster [DBG] pgmap v606: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:38.200371 mgr.hypervisor01 (mgr.14102) 592 : cluster [DBG] pgmap v607: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:39.382685 mon.hypervisor01 (mon.0) 1148 : cluster [DBG] mds.? [v2:10.5.5.1:6818/1480777618,v1:10.5.5.1:6819/1480777618] up:creating
2019-08-31 16:22:39.382739 mon.hypervisor01 (mon.0) 1149 : cluster [DBG] fsmap cephfs:1 {0=hypervisor01=up:creating}
2019-08-31 16:22:42.916408 mon.hypervisor01 (mon.0) 1158 : cluster [DBG] mds.? [v2:10.5.5.1:6818/1480777618,v1:10.5.5.1:6819/1480777618] up:creating
2019-08-31 16:22:42.916454 mon.hypervisor01 (mon.0) 1159 : cluster [DBG] fsmap cephfs:1 {0=hypervisor01=up:creating}
2019-08-31 16:22:47.227064 mon.hypervisor01 (mon.0) 1163 : cluster [DBG] mds.? [v2:10.5.5.1:6818/1480777618,v1:10.5.5.1:6819/1480777618] up:creating
2019-08-31 16:22:47.227126 mon.hypervisor01 (mon.0) 1164 : cluster [DBG] fsmap cephfs:1 {0=hypervisor01=up:creating}
2019-08-31 16:22:40.201145 mgr.hypervisor01 (mgr.14102) 593 : cluster [DBG] pgmap v608: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:42.201923 mgr.hypervisor01 (mgr.14102) 594 : cluster [DBG] pgmap v609: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:44.202708 mgr.hypervisor01 (mgr.14102) 595 : cluster [DBG] pgmap v610: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:46.203664 mgr.hypervisor01 (mgr.14102) 596 : cluster [DBG] pgmap v611: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:48.204369 mgr.hypervisor01 (mgr.14102) 597 : cluster [DBG] pgmap v612: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:51.418931 mon.hypervisor01 (mon.0) 1165 : cluster [DBG] mds.? [v2:10.5.5.1:6818/1480777618,v1:10.5.5.1:6819/1480777618] up:creating
2019-08-31 16:22:51.418979 mon.hypervisor01 (mon.0) 1166 : cluster [DBG] fsmap cephfs:1 {0=hypervisor01=up:creating}
 
My question is: why do I need both CephFS *and* RBD? I need the data on both nodes in the event of a failure.
Both types of storage are distributed; they both rely on RADOS (Reliable Autonomic Distributed Object Store). Details can be found in Ceph's architecture guide [0].

The main difference between the two is that RBD provides a block device, while CephFS provides a filesystem. Both can coexist on the same cluster and usually serve different purposes; see our storage manager documentation for details [1]. For VMs/CTs you want to use RBD, while for backups you may use CephFS (a short sketch of how that can look follows below the links).

[0] https://docs.ceph.com/docs/nautilus/architecture/?highlight=rados
[1] https://pve.proxmox.com/pve-docs/chapter-pvesm.html#ceph_rados_block_devices
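As a rough sketch only, not an official recipe: on a hyperconverged PVE 6 cluster the two storages could be set up along these lines. The pool name vm-pool is a placeholder, and the PG count has to be adapted to your OSD count (see the PG discussion further down).

# CephFS (e.g. for backups/ISOs): one MDS per node that should serve it, then the filesystem itself
pveceph mds create
pveceph fs create --name cephfs --add-storage

# A separate RADOS pool for VM/CT disk images, consumed via RBD
pveceph pool create vm-pool --pg_num 128

# Matching storage definitions in /etc/pve/storage.cfg
# (the GUI, or the --add-storage flag above, creates equivalent entries)
rbd: vm-pool
        content images,rootdir
        pool vm-pool
        krbd 0

cephfs: cephfs
        content backup,iso,vztmpl
        path /mnt/pve/cephfs

With that in place, the RBD storage offers disk images for VMs/CTs and the CephFS storage offers backup/ISO/template content.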
 
For now RBD seems like the way to go, so I do not need an MDS for RBD.
Having 2 OSDs per node is giving me a max PG warning. How can I use all the space of both drives for the cluster without having to reduce the PG count too much? Or can I create multiple OSDs across the disks so I do not run into this issue? They are only 3 TB in size each. Please advise, thank you kindly.
 
Currently I only have 2 drives per node that I can allocate to RBD, both 3 TB each, and I wish to maximize the space I can obtain from the disks.
 
Having 2 OSDs per node is giving me a max PG warning. How can I use all the space of both drives for the cluster without having to reduce the PG count too much? Or can I create multiple OSDs across the disks so I do not run into this issue? They are only 3 TB in size each. Please advise, thank you kindly.
PGs (placement groups) are a logical unit that holds multiple objects, and each PG is distributed across multiple OSDs. Depending on how many hosts and OSDs you have, the PG count needs adjustment. Run your setup through the pgcalc [0] to see what number a pool needs; a rough example of the calculation follows below the links.

And in general, have a look at our docs [1].

[0] https://ceph.com/pgcalc/
[1] https://pve.proxmox.com/pve-docs/chapter-pveceph.html
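For illustration only, here is the rule of thumb behind the pgcalc, applied to the numbers from this thread (assumed: 2 nodes with 2 OSDs each = 4 OSDs, replica size 3, one main pool called vm-pool as a placeholder):

# Rule of thumb: total PGs ≈ (number of OSDs × 100) / replica size, rounded to a power of two,
# then split across pools according to their expected share of the data.
# 4 OSDs, size 3: (4 × 100) / 3 ≈ 133 -> 128 PGs in total

# Check how the PGs are currently spread over the OSDs
ceph osd df

# Adjust an existing pool (Nautilus can decrease pg_num as well)
ceph osd pool set vm-pool pg_num 128
ceph osd pool set vm-pool pgp_num 128

# Alternatively, let the pg_autoscaler (available since Nautilus) manage the PG count
ceph mgr module enable pg_autoscaler
ceph osd pool set vm-pool pg_autoscale_mode on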
 
