Hi everyone,
I have a 7-node PVE cluster with the following pool defaults:
osd_pool_default_min_size = 2
osd_pool_default_size = 3
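For reference, these two lines sit in the [global] section of /etc/pve/ceph.conf (a minimal excerpt from my config; the rest of the file is the standard PVE-generated content):

[global]
     osd_pool_default_min_size = 2
     osd_pool_default_size = 3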
ceph osd pool autoscale-status
POOL                   SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
device_health_metrics  15426k               2.0   106.4T        0.0000                                 1.0   1                   on
vm.pool                2731G                3.0   106.4T        0.0752  1.0000        1.0000           1.0   512                 on
cephfs_data            1978                 2.0   106.4T        0.0000                                 1.0   32                  on
cephfs_metadata        23799k               2.0   106.4T        0.0000                                 4.0   2                   on
ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL   USED     RAW USED  %RAW USED
hdd    106 TiB  98 TiB  8.1 TiB  8.1 TiB   7.60
TOTAL  106 TiB  98 TiB  8.1 TiB  8.1 TiB   7.60
--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics  1   1    15 MiB   22       30 MiB   0      46 TiB
vm.pool                2   512  2.7 TiB  741.87k  8.0 TiB  8.01   31 TiB
cephfs_data            3   32   1.9 KiB  0        3.9 KiB  0      46 TiB
cephfs_metadata        4   2    23 MiB   28       48 MiB   0      46 TiB
rados df
POOL_NAME              USED     OBJECTS  CLONES  COPIES   MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS     RD      WR_OPS     WR       USED COMPR  UNDER COMPR
cephfs_data            3.9 KiB  0        0       0        0                   0        0         0          0 B     0          0 B      0 B         0 B
cephfs_metadata        48 MiB   28       0       56       0                   0        0         0          0 B     14         13 KiB   0 B         0 B
device_health_metrics  30 MiB   22       0       44       0                   0        0         22         44 KiB  22         231 KiB  0 B         0 B
vm.pool                8.0 TiB  741870   0       2225610  0                   0        0         113390981  74 TiB  707422564  12 TiB   0 B         0 B
total_objects 741920
total_used 8.1 TiB
total_avail 98 TiB
total_space 106 TiB
The pools cephfs_data and cephfs_metadata were created when I used Create CephFS; the 7th node is the one I added most recently.
If I check with ceph df on the CLI, cephfs_data and cephfs_metadata show a MAX AVAIL of 46 TiB, but vm.pool only 31 TiB.
I don't understand why the pool I created last (vm.pool) has less MAX AVAIL space than the pools created by Create CephFS.
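My own back-of-the-envelope guess (just an assumption on my part): if MAX AVAIL is roughly the usable raw space divided by each pool's replica count, then with ~98 TiB raw available I get 98 / 2 ≈ 49 TiB for the size-2 CephFS pools (close to the reported 46 TiB once headroom is subtracted) and 98 / 3 ≈ 33 TiB for the size-3 vm.pool (close to 31 TiB). The per-pool replica counts should be checkable with the standard ceph CLI:

ceph osd pool get vm.pool size
ceph osd pool get cephfs_data size

which I expect to report size 3 and size 2 respectively, matching the RATE column in the autoscale output above. Is the difference really just the replication factor, or am I missing something?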
Since I don't use the cephfs_data pool yet, I am considering destroying it, as described in Destroy CephFS.
I need to ask: if I destroy the CephFS pools, will that affect my other pool (vm.pool) or the Ceph storage in general?
And most importantly: if I destroy them, will the MAX AVAIL space for vm.pool increase?
If I ever need CephFS again, I can recreate the pools after destroying them; the procedure I have in mind is sketched below.
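This is the rough sequence I pieced together from the docs (a sketch only, not yet executed; "cephfs" is the default PVE filesystem name, which I assume applies here, and I understand mon_allow_pool_delete must be enabled, the MDS stopped, and the storage entry removed from the PVE config first):

ceph fs fail cephfs
ceph fs rm cephfs --yes-i-really-mean-it
pveceph pool destroy cephfs_data
pveceph pool destroy cephfs_metadata

If I need CephFS later, I would recreate it with pveceph fs create. Is that order correct, and is it safe for vm.pool?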
Proxmox VE version is:
proxmox-ve: 6.4-1 (running kernel: 5.4.162-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-12
pve-kernel-helper: 6.4-12
pve-kernel-5.4.162-1-pve: 5.4.162-2
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph: 15.2.15-pve1~bpo10
ceph-fuse: 15.2.15-pve1~bpo10
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1