CephFS and multiple volumes - incorrect space displayed

dmcgough

First, an actual look at the CLI for real usage. Two CephFS volumes are set up, "media" and "archive":

root@host:~# ceph fs volume ls
[
    {
        "name": "media"
    },
    {
        "name": "archive"
    }
]
root@host:~# ceph fs volume info archive --human-readable
{
    "mon_addrs": [
        "192.168.3.4:6789",
        "192.168.3.3:6789",
        "192.168.3.2:6789"
    ],
    "pools": {
        "data": [
            {
                "avail": "10.0T",
                "name": "archive-data",
                "used": "7816G"
            }
        ],
        "metadata": [
            {
                "avail": " 721G",
                "name": "archive-metadata",
                "used": " 762M"
            }
        ]
    }
}
root@host:~# ceph fs volume info media --human-readable
{
    "mon_addrs": [
        "192.168.3.4:6789",
        "192.168.3.3:6789",
        "192.168.3.2:6789"
    ],
    "pools": {
        "data": [
            {
                "avail": "5128G",
                "name": "media_data",
                "used": "20.0T"
            }
        ],
        "metadata": [
            {
                "avail": " 721G",
                "name": "media_metadata",
                "used": " 806M"
            }
        ]
    }
}


[Two screenshots of the storage usage shown in the PVE GUI]

The two screenshots were taken a few seconds apart while data was being deleted. It looks like the way PVE parses CephFS, it reads the space details of both volumes and sums them? The two volumes are even on different pools on the backend...
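For comparison, here is a minimal sketch of my own (not how PVE does it internally) that pulls the per-volume numbers straight from the ceph CLI. It assumes `ceph fs volume info <name> --format json` is accepted and that the non-human-readable output reports "used"/"avail" in bytes:

#!/usr/bin/env python3
# Sketch: report per-volume CephFS data-pool usage from the ceph CLI.
# Assumption: `--format json` output gives byte counts for "used"/"avail".
import json
import subprocess


def ceph_json(*args: str):
    """Run a ceph subcommand and parse its JSON output."""
    proc = subprocess.run(
        ["ceph", *args, "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(proc.stdout)


def tib(n_bytes: int) -> float:
    """Convert bytes to TiB."""
    return n_bytes / 2**40


for vol in ceph_json("fs", "volume", "ls"):
    info = ceph_json("fs", "volume", "info", vol["name"])
    for pool in info["pools"]["data"]:
        print(f'{vol["name"]}: data pool {pool["name"]} '
              f'used={tib(pool["used"]):.1f} TiB, '
              f'avail={tib(pool["avail"]):.1f} TiB')

Queried that way, each volume reports only its own data pool (here roughly 7.8T used / 10T free on archive-data versus 20T used / 5T free on media_data), so a storage backed by just one of these volumes should not be showing the combined total of both.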
 