Search results

  1.

    [SOLVED] Ceph & cephfs_data Pools

    Thank you aaron, now I understand it all.
  2.

    [SOLVED] Ceph & cephfs_data Pools

    One question about available space. If I set a fixed expected pool size (# ceph osd pool set MY_POOL_NAME target_size_bytes 60T, or 50T) or a relative pool size as a fraction of the full space (# ceph osd pool set MY_POOL_NAME target_size_ratio .9), does that change the available space on vm.pool? (See the command sketch after this list.)
  3.

    [SOLVED] Ceph & cephfs_data Pools

    Thanks, now I understand. Please tell me, where do you see the replication size of vm.pool set to 4?
  4.

    [SOLVED] Ceph & cephfs_data Pools

    Now I set 3/2 on the other pools and ceph df shows:

    ceph df
    --- RAW STORAGE ---
    CLASS    SIZE     AVAIL   USED     RAW USED  %RAW USED
    hdd      106 TiB  98 TiB  8.1 TiB  8.1 TiB   7.62
    TOTAL    106 TiB  98 TiB  8.1 TiB  8.1 TiB   7.62
    --- POOLS ---
    POOL  ID  PGS  STORED...
  5.

    [SOLVED] Ceph & cephfs_data Pools

    If I understand correctly: if I set up the other pools with size 3 / min. size 2, then the total available space can change? (See the capacity sketch after this list.)
  6.

    [SOLVED] Ceph & cephfs_data Pools

    If I do not get more space by destroying the CephFS pools, I think there is no need to destroy them. The output of pveceph pool ls: pveceph pool ls...
  7.

    [SOLVED] Ceph & cephfs_data Pools

    I do not see the replication size of vm.pool set to 4?
  8.

    [SOLVED] Ceph & cephfs_data Pools

    Hi everyone, in my case I have 7 PVE nodes.

    osd_pool_default_min_size = 2
    osd_pool_default_size = 3

    ceph osd pool autoscale-status
    POOL                   SIZE  TARGET SIZE  RATE  RAW CAPACITY  RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
    device_health_metrics...
  9.

    missing replicate feature on volume 'local-lvm:vm-1001-disk-1' (500)

    Hello, you say the ZFS storage on the 3 nodes must have the same name? I have node1, node2, node3. For now I have iscsi.share1 with zsf1.iscsi over iSCSI, backed by a big storage. I tried to add the ZFS storage zpool1 on all of the nodes, but it does not work. Can you suggest some other way? (See the ZFS sketch after this list.)
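
As a follow-up to item 2, a minimal command sketch (the pool name MY_POOL_NAME is a placeholder, as in the thread). As I understand it, these two settings are hints for the PG autoscaler: they influence pg_num sizing, and they do not change the available space that ceph df reports for a pool.

    # Fixed expected pool size (PG autoscaler hint):
    ceph osd pool set MY_POOL_NAME target_size_bytes 60T

    # Or a relative share of the cluster's capacity (PG autoscaler hint):
    ceph osd pool set MY_POOL_NAME target_size_ratio .9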
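
For the capacity question in item 5, a rough back-of-the-envelope sketch using the figures from item 4 (106 TiB raw, 98 TiB available); the pool name vm.pool comes from the thread, and the numbers assume plain replicated pools with no other overhead:

    # Set a replicated pool to size 3 / min_size 2:
    ceph osd pool set vm.pool size 3
    ceph osd pool set vm.pool min_size 2

    # Every object is stored "size" times, so the MAX AVAIL that
    # ceph df shows per pool is roughly the raw AVAIL divided by size:
    #   98 TiB / 3 ≈ 32.7 TiB  (size = 3)
    #   98 TiB / 2 ≈ 49 TiB    (size = 2)
    # Changing a pool's size therefore changes its reported available
    # space even though the raw capacity stays the same.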
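
And for the ZFS question in item 9: Proxmox storage replication needs a zfspool storage that exists under the same name on every participating node. A minimal sketch, assuming a pool named zpool1 (as in the post) and a free disk /dev/sdX on each node (the disk path is a placeholder):

    # On EACH node (node1, node2, node3): create a pool with the same name.
    zpool create zpool1 /dev/sdX

    # Once, on any node: register it as cluster-wide ZFS storage,
    # limited to the nodes that actually have the pool.
    pvesm add zfspool zpool1 --pool zpool1 --nodes node1,node2,node3 --content images,rootdir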