My storage capacity is decreasing while I'm restoring my backup

Douig

New Member
Mar 12, 2024
Hello,

I'm a student and I'm very new to Ceph storage, so I know I'm missing a lot of things.
I run a 5-node Proxmox + Ceph cluster and I'm planning to add more nodes. I'm limited by my means: on each node I only have a single disk I can use, and it already held data I couldn't delete, so I had to partition it and give Ceph a partition on every node. As a result, the sizes of my partitions are very inconsistent.
When I installed it, I created 5 OSDs (one from each partition) and made a pool with size/min_size of 3/2; I also have 5 monitors/managers. From what I understood, when you create a pool, "size" means that 3 copies of the data are kept across the OSDs in my cluster, so if I add data to my Ceph storage it will be duplicated three times.
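(I think these are the commands to check the pool's replication settings; I'm assuming the pool name SC from the output further down, so correct me if that's not the right way to look at it.)

Bash:
# check how many copies the pool keeps and how many it needs to stay writable
ceph osd pool get SC size        # expect: size: 3
ceph osd pool get SC min_size    # expect: min_size: 2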
In total I have 1.4 TiB of raw storage, but when I look at my Ceph storage it tells me I can only use about half of it. The other thing I struggle to understand is that when I add data to my Ceph storage, the maximum capacity it reports keeps decreasing. My guess is that the data I'm adding also has to be duplicated three times, so my question is: is there a way to know how much space my VM is really going to occupy?
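If my guess is right, the numbers below seem to match 3x replication. This is the rough arithmetic I came up with (assuming SC is the RBD pool that holds the VM disks):

Bash:
# rough arithmetic for a replicated pool with size=3:
#   STORED (logical data)  : 21 GiB
#   USED   (raw consumed)  : 21 GiB * 3 = 63 GiB   (close to the 64 GiB 'ceph df' reports)
#   MAX AVAIL              : roughly the free raw space divided by 3
ceph df detail        # STORED vs USED shows the replication overhead per pool
rbd du --pool SC      # per-image provisioned size vs space actually used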
And can you also explain to me what PGs are, please?

I may be missing some information in my request, and of course any advice on improving my cluster is very much appreciated.

Thank you.

Bash:
root@pve:~# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
hdd    363 GiB  347 GiB  17 GiB    17 GiB       4.57
ssd    1.1 TiB  1.0 TiB  54 GiB    54 GiB       4.99
TOTAL  1.4 TiB  1.3 TiB  71 GiB    71 GiB       4.88
 
--- POOLS ---
POOL  ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr   1    1  2.3 MiB        2  6.8 MiB      0    424 GiB
SC     4   32   21 GiB    5.50k   64 GiB   4.81    424 GiB
root@pve:~# rados df
POOL_NAME     USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS       RD  WR_OPS      WR  USED COMPR  UNDER COMPR
.mgr       6.8 MiB        2       0       6                   0        0         0    1271  2.5 MiB     989  16 MiB         0 B          0 B
SC          64 GiB     5500       0   16500                   0        0         0   33936   44 MiB   40965  22 GiB         0 B          0 B

total_objects    5502
total_used       71 GiB
total_avail      1.3 TiB
total_space      1.4 TiB
root@pve:~# ceph osd df tree
ID   CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
 -1         1.41435         -  1.4 TiB   73 GiB   67 GiB  14 KiB  6.4 GiB  1.3 TiB  5.06  1.00    -          root default
-16         0.35489         -  363 GiB   17 GiB   16 GiB     0 B  1.4 GiB  346 GiB  4.73  0.94    -              host PVE5
  4    hdd  0.35489   1.00000  363 GiB   17 GiB   16 GiB     0 B  1.4 GiB  346 GiB  4.73  0.94   24      up          osd.4
 -3         0.09850         -  101 GiB  7.4 GiB  6.2 GiB   9 KiB  1.2 GiB   93 GiB  7.32  1.45    -              host pve
  0    ssd  0.09850   1.00000  101 GiB  7.4 GiB  6.2 GiB   9 KiB  1.2 GiB   93 GiB  7.32  1.45    9      up          osd.0
 -5         0.56949         -  583 GiB   24 GiB   22 GiB     0 B  1.3 GiB  560 GiB  4.05  0.80    -              host pve2
  3    ssd  0.56949   1.00000  583 GiB   24 GiB   22 GiB     0 B  1.3 GiB  560 GiB  4.05  0.80   33      up          osd.3
 -7         0.19609         -  201 GiB   12 GiB   11 GiB     0 B  1.3 GiB  188 GiB  6.20  1.23    -              host pve3
  2    ssd  0.19609   1.00000  201 GiB   12 GiB   11 GiB     0 B  1.3 GiB  188 GiB  6.20  1.23   17      up          osd.2
 -9         0.19539         -  200 GiB   13 GiB   11 GiB   5 KiB  1.2 GiB  188 GiB  6.29  1.24    -              host pve4
  1    ssd  0.19539   1.00000  200 GiB   13 GiB   11 GiB   5 KiB  1.2 GiB  188 GiB  6.29  1.24   16      up          osd.1
                        TOTAL  1.4 TiB   73 GiB   67 GiB  15 KiB  6.4 GiB  1.3 TiB  5.06                              
MIN/MAX VAR: 0.80/1.45  STDDEV: 1.35

I also had this error, and I don't really know what it means:

Code:
WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "vm-104-disk-0" created.
  WARNING: Sum of all thin volume sizes (177.00 GiB) exceeds the size of thin pool pve/data and the amount of free space in volume group (16.00 GiB).
WARN: no efidisk configured! Using temporary efivars disk.
QEMU: kvm: cannot set up guest memory 'pc.ram': Cannot allocate memory

TASK ERROR: start failed: QEMU exited with code 1
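From searching around, I think the thin-pool warnings refer to the LVM autoextend settings (I haven't changed anything yet, so this is just what I found); the last QEMU line looks like the node simply ran out of RAM when starting the VM.

Bash:
# show the current LVM settings the warning mentions
lvmconfig activation/thin_pool_autoextend_threshold   # 100 means automatic extension is disabled
lvmconfig activation/thin_pool_autoextend_percent
# the warning suggests setting the threshold below 100 in /etc/lvm/lvm.conf, for example:
#   activation {
#       thin_pool_autoextend_threshold = 80   # extend when the thin pool is 80% full
#       thin_pool_autoextend_percent = 20     # grow it by 20% each time
#   }
# note: autoextend only helps if the volume group still has free space to grow into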
 
