Documentation bug and I'm unable to create CephFS

Madhatter

Active Member
Apr 8, 2012
One minor but important fix for the documentation, please: on https://pve.proxmox.com/wiki/Manage_Ceph_Services_on_Proxmox_VE_Nodes, under "Destroy CephFS",

the command
ceph rm fs NAME --yes-i-really-mean-it
should be
ceph fs rm NAME --yes-i-really-mean-it
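For completeness, a hedged sketch of the full teardown on Luminous, assuming the default names that `pveceph fs create` uses (cephfs, cephfs_data, cephfs_metadata) and that pool deletion is permitted (`mon_allow_pool_delete = true`):

```shell
# Stop the MDS first; `ceph fs rm` refuses while any MDS is still active.
systemctl stop ceph-mds@$(hostname).service

# Remove the filesystem itself:
ceph fs rm cephfs --yes-i-really-mean-it

# The data and metadata pools survive `fs rm` and must be deleted separately
# (pool name is given twice as a deliberate safety check):
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
```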

However, I also have a problem: Ceph itself is up and running, but creating a CephFS doesn't work.

pveversion
pve-manager/5.3-5/97ae681d (running kernel: 4.15.18-9-pve)

The MDS is created:
root@proxmox:~# ceph mds stat
, 1 up:standby

Creating a CephFS:

root@proxmox:~# pveceph fs create --pg_num 128 --add-storage
creating data pool 'cephfs_data'...
creating metadata pool 'cephfs_metadata'...
configuring new CephFS 'cephfs'
Successfully create CephFS 'cephfs'
Adding 'cephfs' to storage configuration...
Waiting for an MDS to become active
Waiting for an MDS to become active
Waiting for an MDS to become active
Waiting for an MDS to become active
Waiting for an MDS to become active
Waiting for an MDS to become active
Waiting for an MDS to become active
Waiting for an MDS to become active
Waiting for an MDS to become active
Waiting for an MDS to become active
Need MDS to add storage, but none got active!


root@proxmox:~# ceph mds stat
cephfs-1/1/1 up {0=proxmox=up:creating}
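When an MDS hangs in up:creating, it is trying to write its initial metadata; if the metadata pool's PGs are not active+clean, it can block there indefinitely. A few checks usually narrow it down (the node name "proxmox" is taken from the output above):

```shell
ceph -s                                    # overall cluster state; look for inactive/undersized PGs
ceph health detail                         # names the exact PGs and pools affected
journalctl -u ceph-mds@proxmox             # the MDS's own log via the systemd journal
tail /var/log/ceph/ceph-mds.proxmox.log    # or the classic log file location
```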

Does anyone have any idea, please?
 

Alwin

Proxmox Retired Staff
Retired Staff
Aug 1, 2017

Madhatter

Active Member
Apr 8, 2012
Unchanged since yesterday
root@proxmox:~# ceph mds stat
cephfs-1/1/1 up {0=proxmox=up:creating}


root@proxmox:~# ceph fs status
cephfs - 0 clients
======
+------+----------+---------+----------+-----+------+
| Rank |  State   |   MDS   | Activity | dns | inos |
+------+----------+---------+----------+-----+------+
|  0   | creating | proxmox |          |  0  |  0   |
+------+----------+---------+----------+-----+------+
+-----------------+----------+------+-------+
|       Pool      |   type   | used | avail |
+-----------------+----------+------+-------+
| cephfs_metadata | metadata |   0  |  589G |
|   cephfs_data   |   data   |   0  |  589G |
+-----------------+----------+------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 12.2.8 (6f01265ca03a6b9d7f3b7f759d8894bb9dbb6840) luminous (stable)

I'm trying to get my head around the correct PG counts. Could this be the issue?
cluster [WRN] Health check update: Degraded data redundancy: 108 pgs undersized (PG_DEGRADED)
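For reference, the Luminous-era rule of thumb for sizing is roughly (OSD count × 100) / replica count, rounded up to a power of two. A quick sketch, using example values (3 OSDs, size 3) rather than numbers from this cluster:

```shell
# Rule-of-thumb PG target: (OSDs * 100) / replicas, rounded up to the
# next power of two. The OSD and replica counts here are example values.
osds=3
size=3
target=$(( osds * 100 / size ))

# Round up to the next power of two:
pgs=1
while [ "$pgs" -lt "$target" ]; do pgs=$(( pgs * 2 )); done

echo "$target -> $pgs"   # prints "100 -> 128"
```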
 

Alwin

Proxmox Retired Staff
Retired Staff
Aug 1, 2017
I'm trying to get my head around the correct PG counts. Could this be the issue?
cluster [WRN] Health check update: Degraded data redundancy: 108 pgs undersized (PG_DEGRADED)
That may be why the MDS isn't starting up; you will find more information in the Ceph logs or in the journal. In any case, the pool replication issue needs to be solved. My guess is that there are not enough failure domains (host level by default) available for the replication.
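A hedged way to check that guess, assuming the default replicated CRUSH rule with a host-level failure domain:

```shell
ceph osd pool get cephfs_metadata size      # replicas the pool wants
ceph osd pool get cephfs_metadata min_size  # replicas required before I/O is allowed
ceph osd tree                               # count the hosts that actually carry OSDs

# If size exceeds the number of hosts, PGs stay undersized forever.
# Either add nodes, lower size/min_size, or (for test setups only) use
# a CRUSH rule with an osd-level instead of host-level failure domain.
```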
 

wylde

New Member
Oct 6, 2015
I'm using 5.3-9 and I'm also trying to create a CephFS. I have 1 Ceph MDS and Ceph's health is OK: 3 monitors, as instructed, across 8 nodes with 2 OSDs each.
root@blah:~# ceph mds stat
, 1 up:standby
root@blah:~# pveceph fs create --pg_num 128 --add-storage
creating data pool 'cephfs_data'...
mon_command failed - error parsing integer value '': Expected option value to be integer, got ''in"}

Any suggestions on how to win this battle?
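Until the parsing bug is fixed, one possible workaround is to bypass pveceph's pool creation and drive Ceph directly. This is a sketch under assumptions: the pool names and PG counts mirror what `pveceph fs create --pg_num 128` would produce, and the `pvesm` content types are illustrative, not mandated by the thread:

```shell
# Clean up the half-created data pool from the failed run first
# (requires mon_allow_pool_delete = true):
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it

# Create both pools and the filesystem by hand:
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 32
ceph fs new cephfs cephfs_metadata cephfs_data

# Then register it as a Proxmox storage entry:
pvesm add cephfs cephfs --content backup,iso,vztmpl
```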

Thanks
 

Alwin

Proxmox Retired Staff
Retired Staff
Aug 1, 2017
What package versions are you on (pveversion -v)? Which pools exist on your cluster?
 
Aug 7, 2018
I can confirm the error wylde is getting, on a new cluster I'm building. A previous installation did not give this error. We update the machines every week, and as of today many Ceph packages are about to be updated.

The cephfs_data pool is created; however, the metadata pool is not.

pveversion -v output is:

Code:
proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve)
pve-manager: 5.3-9 (running version: 5.3-9/ba817b29)
pve-kernel-4.15: 5.3-2
pve-kernel-4.15.18-11-pve: 4.15.18-33
ceph: 12.2.11-pve1
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: not correctly installed
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-46
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-38
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-2
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-33
pve-container: 2.0-34
pve-docs: 5.3-2
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-17
pve-firmware: 2.0-6
pve-ha-manager: 2.0-6
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 3.10.1-1
qemu-server: 5.0-46
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
 
