pve7to8, Ceph Pacific-to-Quincy: how do I discover the fs_name?

ltgcc

Member
In https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy there is this step:

Disable standby_replay
ceph fs set <fs_name> allow_standby_replay false

Which raises the question: how do I discover the fs_name?
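
(To be clear, if the filesystem were named, say, cephfs — a purely hypothetical name, not something from my setup — I gather the filled-in command would look like:

ceph fs set cephfs allow_standby_replay false

but I don't know what mine is called.)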

From https://docs.ceph.com/en/latest/cephfs/administration/:

ceph fs ls

List all file systems by name.
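
On a cluster that actually has a CephFS, I gather this prints one line per filesystem, something like the following (the names below are just the common defaults, not from my cluster):

name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]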

But when I run this as root on all my PVE nodes:

root@pve3:~# ceph fs ls
No filesystems enabled.

FWIW:

root@pve1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

rbd: ceph-pve-pool
        content rootdir,images
        krbd 0
        pool ceph-pve-pool

dir: USB_XFS_2T
        path /media/USB_XFS_2T
        content images
        prune-backups keep-all=1
        shared 0
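
For comparison, from what I can tell a CephFS storage would appear in storage.cfg as its own stanza, something like this sketch (the storage ID and fs name here are hypothetical):

cephfs: my-cephfs
        path /mnt/pve/my-cephfs
        content backup,iso,vztmpl
        fs-name my-cephfs

I don't see anything like that here.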

root@pve1:~# cat /etc/pve/ceph.conf
[global]
        auth_client_required = cephx
        auth_cluster_required = cephx
        auth_service_required = cephx
        cluster_network = 10.55.10.0/28
        fsid = 7cc47e3a-da1f-4d60-94df-449fef782cb6
        mon_allow_pool_delete = true
        mon_host = 10.55.10.2 10.55.10.3 10.55.10.4
        ms_bind_ipv4 = true
        ms_bind_ipv6 = false
        osd_pool_default_min_size = 2
        osd_pool_default_size = 3
        public_network = 10.55.10.0/28

[client]
        keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.pve1]
        public_addr = 10.55.10.2

[mon.pve2]
        public_addr = 10.55.10.3

[mon.pve3]
        public_addr = 10.55.10.4



I'm at a loss. What next?
thanks!


Followup:

Looking at this later, I did see this:

[screenshot: a storage entry named ceph-pve-pool shown in the Proxmox GUI]

There is the same entry ceph-pve-pool on the other two nodes pve2 and pve3.

But when I tried "ceph-pve-pool" as the fs_name:

root@pve1:~# ceph fs set ceph-pve-pool allow_standby_replay false
Error ENOENT: Filesystem not found: 'ceph-pve-pool'
 
Hi,
it seems like you don't have a CephFS configured, but are only using Ceph as an RBD storage (rather than as a file system). Your configuration also doesn't show any MDS daemons, only monitors. So you can skip that part of the upgrade guide.
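
If you want to double-check, a couple of read-only commands should confirm it (a minimal sketch; the exact wording of the output varies between Ceph releases):

root@pve1:~# ceph fs ls
No filesystems enabled.
root@pve1:~# ceph mds stat
root@pve1:~# ceph -s | grep -i mds

With no CephFS there is nothing for an mds line in ceph -s to report, and ceph mds stat shows no active daemons. A node actually running an MDS would typically also get an [mds.<node>] section in /etc/pve/ceph.conf, which yours doesn't have.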