Hi,
We have a 4-node Proxmox cluster that I just updated to Proxmox 5.3 (from 5.2) without any problems.
Now I want to test the new CephFS support in Proxmox 5.3, but after I add it via the storage menu in the web interface, the CephFS storage entry only shows a grey question mark.
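For reference, as far as I understand it the web interface writes a CephFS entry into /etc/pve/storage.cfg that looks roughly like this (the monitor addresses below are just placeholders, not my real ones; on a hyperconverged cluster the monhost line can apparently be left out):
Code:
cephfs: cephfs
        path /mnt/pve/cephfs
        content iso
        monhost 10.0.0.1 10.0.0.2 10.0.0.3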
The syslog contains the following errors:
Code:
Dec 27 16:02:29 pve1 pvestatd[16591]: A filesystem is already mounted on /mnt/pve/cephfs
Dec 27 16:02:29 pve1 pvestatd[16591]: Use of uninitialized value in sort at /usr/share/perl5/PVE/Storage/CephTools.pm line 61.
Dec 27 16:02:29 pve1 pvestatd[16591]: Use of uninitialized value in sort at /usr/share/perl5/PVE/Storage/CephTools.pm line 61.
Dec 27 16:02:29 pve1 pvestatd[16591]: Use of uninitialized value in sort at /usr/share/perl5/PVE/Storage/CephTools.pm line 61.
Dec 27 16:02:29 pve1 pvestatd[16591]: Use of uninitialized value in join or string at /usr/share/perl5/PVE/Storage/CephTools.pm line 63.
Dec 27 16:02:29 pve1 pvestatd[16591]: mount error: exit code 16
After these errors show up, the CephFS is mounted under /mnt/pve/..., but nothing was mounted on that path before I tried to add CephFS via the web interface.
We already had CephFS running before the update to 5.3 (but not mounted locally).
Maybe this is the problem? But how can I get my existing CephFS to show up as a storage location?
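If it matters, this is how I would check and clear the existing mount before letting Proxmox try again (just a sketch, assuming the default mount point /mnt/pve/cephfs):
Code:
# check what is currently mounted on the storage mount point
findmnt /mnt/pve/cephfs

# if a stale CephFS mount shows up, unmounting it should let pvestatd mount it itself
umount /mnt/pve/cephfs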
Here is the output of "ceph -s" and "ceph fs status".
(I know there is a health warning; I wanted to get rid of it by moving the locally stored ISOs from pve1 to CephFS now that it is supported as an ISO storage location. A rough sketch of how I would move them follows after the outputs.)
Code:
root@pve1:~# ceph -s
  cluster:
    id:     e9f42f14-bed0-4839-894b-0ca3e598320e
    health: HEALTH_WARN
            mon pve1 is low on available space

  services:
    mon: 3 daemons, quorum pve1,pve2,pve3
    mgr: pve1(active), standbys: pve3, pve2
    mds: cephfs-1/1/1 up {0=pve1=up:active}
    osd: 48 osds: 48 up, 48 in

  data:
    pools:   10 pools, 3128 pgs
    objects: 5.67M objects, 17.3TiB
    usage:   52.1TiB used, 297TiB / 349TiB avail
    pgs:     3128 active+clean

  io:
    client: 676B/s rd, 21.0KiB/s wr, 0op/s rd, 2op/s wr
Code:
root@pve1:~# ceph fs status
cephfs - 0 clients
======
+------+--------+------+---------------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+--------+------+---------------+-------+-------+
| 0 | active | pve1 | Reqs: 0 /s | 41.0k | 41.0k |
+------+--------+------+---------------+-------+-------+
+-------------+----------+-------+-------+
| Pool | type | used | avail |
+-------------+----------+-------+-------+
| cephfs_meta | metadata | 189M | 89.2T |
| cephfs_data | data | 2059G | 89.2T |
+-------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 12.2.10 (fc2b1783e3727b66315cc667af9d663d30fe7ed4) luminous (stable)
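As mentioned above, once the storage is active my rough plan for the ISOs would be something like this (assuming the default local storage path /var/lib/vz and that the CephFS storage uses the usual template/iso subdirectory; both are assumptions on my side):
Code:
# assumes the default local storage layout and the usual template/iso
# subdirectory on the mounted CephFS storage
mkdir -p /mnt/pve/cephfs/template/iso
mv /var/lib/vz/template/iso/*.iso /mnt/pve/cephfs/template/iso/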
regards
max