One question about avail space.
If I try the commands:
set a fixed expected pool size
# ceph osd pool set MY_POOL_NAME target_size_bytes 60T (or 50T)
or a relative pool size (fraction of the full space)
# ceph osd pool set MY_POOL_NAME target_size_ratio .9
does this change the avail space shown for vm.pool?
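For reference, this is how I check the values afterwards (just a sketch, assuming the pool is vm.pool; I think the get calls take the same variable names as set):
# ceph osd pool get vm.pool target_size_bytes
# ceph osd pool get vm.pool target_size_ratio
# ceph osd pool autoscale-status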
Now I have set 3/2 on the other pools and ceph df shows:
ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL   USED     RAW USED  %RAW USED
hdd    106 TiB  98 TiB  8.1 TiB  8.1 TiB   7.62
TOTAL  106 TiB  98 TiB  8.1 TiB  8.1 TiB   7.62
--- POOLS ---
POOL  ID  PGS  STORED...
Hi everyone,
In my case I have 7 PVE nodes.
osd_pool_default_min_size = 2
osd_pool_default_size = 3
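The equivalent per-pool commands, in case it helps (a sketch; POOL_NAME is a placeholder for each pool):
# ceph osd pool set POOL_NAME size 3
# ceph osd pool set POOL_NAME min_size 2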
ceph osd pool autoscale-status
POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE
device_health_metrics...
Hello,
You say the ZFS storage on the 3 nodes must have the same name?
I have the nodes node1, node2, node3.
For now I have iscsi.share1 with zsf1.iscsi over iSCSI support with big storage. I tried to add a ZFS storage zpool1 on all of the nodes, but it does not work (roughly what I tried is shown below).
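A sketch of the storage entry I tried to add (assuming the zpool is also named zpool1 on every node; the exact options are from memory):
# pvesm add zfspool zpool1 --pool zpool1 --content images,rootdir --nodes node1,node2,node3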
Can you suggest some other way?