Problem with mds

Armando Ramos Roche
May 19, 2018
Hi all,
I have a cluster with 3 nodes (pve, pve1, pve2)
Here the version information:
Code:
root@pve:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-11
pve-kernel-helper: 6.4-11
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-4.15: 5.4-8
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph: 15.2.15-pve1~bpo10
ceph-fuse: 15.2.15-pve1~bpo10
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.6-pve1~bpo10+1
I have created an MDS service on each of them (mds.pve, mds.pve1, mds.pve2), but none of them becomes active; they always stay in standby.
[Attachment: 1641926043481.png]
 
> But none of them come out as active.. always as standby..
I mean, you have no CephFS configured yet, so what would the MDS do? Remember that one MDS handles one CephFS, so once you create your first CephFS, one of them should go from standby to active.
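As a sketch of the step described above (on Proxmox VE 6.x this can be done with pveceph; the FS name is an example, and the commands assume a working cluster with OSDs):

```shell
# Create the first CephFS; this also creates its data and metadata pools.
# --add-storage additionally registers it as a PVE storage entry.
pveceph fs create --name cephfs --add-storage

# One standby MDS should now be promoted to rank 0; check with:
ceph mds stat
# expected something like: cephfs:1 {0=pve=up:active} 2 up:standby
```

The same can be done from the GUI under Node -> Ceph -> CephFS.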
 
> I mean, you have no CephFS configured yet, so what would the MDS do? Remember that one MDS handles one CephFS, so once you create your first CephFS one should go from standby to active.
Thanks a lot, t.lamprecht.
I created a CephFS, but none of them became active. Look:
[Attachment: 1642082850586.png]
And now one of them has entered the creating state...
 
Can you post the output of ceph -s and ceph versions?
 
Code:
root@pve:~# ceph -s
  cluster:
    id:     8bfacf0e-e4e2-4c1e-a4b4-a3978cbc0bc5
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            Reduced data availability: 160 pgs inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum pve,pve2,pve1 (age 6h)
    mgr: pve1(active, since 2w), standbys: pve2, pve
    mds: cephfs:1 {0=pve=up:creating} 2 up:standby
    osd: 0 osds: 0 up, 0 in

  task status:

  data:
    pools:   2 pools, 160 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             160 unknown
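For reference, the mds: line in the ceph -s output above follows the fsmap summary format <fsname>:<active-ranks> {<rank>=<daemon>=up:<state>} <N> up:standby. A small sketch extracting rank 0 from that line with standard tools (the sample string is copied from the output above):

```shell
# mds line copied from the `ceph -s` output in this thread
fsmap="cephfs:1 {0=pve=up:creating} 2 up:standby"

# Pull out the daemon name and its state for rank 0
daemon=$(echo "$fsmap" | sed -n 's/.*{0=\([^=]*\)=up:\([^}]*\)}.*/\1/p')
state=$(echo "$fsmap" | sed -n 's/.*{0=\([^=]*\)=up:\([^}]*\)}.*/\2/p')
echo "rank 0: $daemon is $state"   # rank 0: pve is creating
```

Here up:creating means rank 0 is still initializing its metadata objects; with 0 OSDs in the cluster (as the HEALTH_WARN shows) the pools cannot store anything, so the MDS cannot finish creating.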