Hi, while setting up a new CephFS on Proxmox 6 I keep running into the same problem and the same questions.
I am trying to build a standby HA cluster, with both nodes having identical local storage, so the data needs to be on both nodes in the event of a failure.
My question is: why do I need both CephFS *and* RBD? Do I need to use RBD and CephFS at the same time, or do I even need CephFS at all, or just RBD? What is the point of CephFS in a cluster? Can someone please explain?
Also, when trying to set up an MDS for CephFS, the MDS stays in up:creating and I see the following in the log over and over:
2019-08-31 16:22:34.198697 mgr.hypervisor01 (mgr.14102) 590 : cluster [DBG] pgmap v605: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:36.199704 mgr.hypervisor01 (mgr.14102) 591 : cluster [DBG] pgmap v606: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:38.200371 mgr.hypervisor01 (mgr.14102) 592 : cluster [DBG] pgmap v607: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:39.382685 mon.hypervisor01 (mon.0) 1148 : cluster [DBG] mds.? [v2:10.5.5.1:6818/1480777618,v1:10.5.5.1:6819/1480777618] up:creating
2019-08-31 16:22:39.382739 mon.hypervisor01 (mon.0) 1149 : cluster [DBG] fsmap cephfs:1 {0=hypervisor01=up:creating}
2019-08-31 16:22:42.916408 mon.hypervisor01 (mon.0) 1158 : cluster [DBG] mds.? [v2:10.5.5.1:6818/1480777618,v1:10.5.5.1:6819/1480777618] up:creating
2019-08-31 16:22:42.916454 mon.hypervisor01 (mon.0) 1159 : cluster [DBG] fsmap cephfs:1 {0=hypervisor01=up:creating}
2019-08-31 16:22:47.227064 mon.hypervisor01 (mon.0) 1163 : cluster [DBG] mds.? [v2:10.5.5.1:6818/1480777618,v1:10.5.5.1:6819/1480777618] up:creating
2019-08-31 16:22:47.227126 mon.hypervisor01 (mon.0) 1164 : cluster [DBG] fsmap cephfs:1 {0=hypervisor01=up:creating}
2019-08-31 16:22:40.201145 mgr.hypervisor01 (mgr.14102) 593 : cluster [DBG] pgmap v608: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:42.201923 mgr.hypervisor01 (mgr.14102) 594 : cluster [DBG] pgmap v609: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:44.202708 mgr.hypervisor01 (mgr.14102) 595 : cluster [DBG] pgmap v610: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:46.203664 mgr.hypervisor01 (mgr.14102) 596 : cluster [DBG] pgmap v611: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:48.204369 mgr.hypervisor01 (mgr.14102) 597 : cluster [DBG] pgmap v612: 250 pgs: 250 undersized+peered; 0 B data, 6.1 MiB used, 5.5 TiB / 5.5 TiB avail
2019-08-31 16:22:51.418931 mon.hypervisor01 (mon.0) 1165 : cluster [DBG] mds.? [v2:10.5.5.1:6818/1480777618,v1:10.5.5.1:6819/1480777618] up:creating
2019-08-31 16:22:51.418979 mon.hypervisor01 (mon.0) 1166 : cluster [DBG] fsmap cephfs:1 {0=hypervisor01=up:creating}
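For reference, this is roughly how I am checking the cluster state while it sits like this - just a sketch using standard ceph commands, not output from the log above:

ceph -s                  # overall health; shows the 250 undersized+peered PGs
ceph osd tree            # how many OSDs and hosts are actually in the CRUSH map
ceph osd pool ls detail  # size/min_size of the CephFS data and metadata pools
ceph fs status           # shows the MDS rank still sitting in up:creating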