Hello
So I added some OSDs, knowing that placement groups (PGs) would have to be added.
ceph -s shows:
Code:
# ceph -s
cluster 63efaa45-7507-428f-9443-82a0a546b70d
health HEALTH_WARN
too many PGs per OSD (369 > max 300)
pool ceph-kvm has many more objects per pg than average (too few pgs?)
monmap e5: 5 mons at {0=10.2.2.21:6789/0,1=10.2.2.10:6789/0,2=10.2.2.67:6789/0,3=10.2.2.6:6789/0,4=10.2.2.65:6789/0}
election epoch 94, quorum 0,1,2,3,4 3,1,0,4,2
osdmap e413: 9 osds: 9 up, 9 in
flags sortbitwise,require_jewel_osds
pgmap v1579531: 1152 pgs, 4 pools, 446 GB data, 112 kobjects
893 GB used, 3083 GB / 3977 GB avail
1152 active+clean
client io 32044 B/s wr, 0 op/s rd, 7 op/s wr
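If I understand the docs correctly, the usual way to add placement groups to the existing pool would be to raise pg_num and pgp_num directly; I have not run this, and the target of 512 below is just an example, not a recommendation:
Code:
# grow the placement group count of the existing pool (example target only)
ceph osd pool set ceph-kvm pg_num 512
ceph osd pool set ceph-kvm pgp_num 512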
I tried to add placement groups from the PVE GUI by:
creating a new pool with these settings: size/min: 3/1, pg_num 512
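For reference, I believe the CLI equivalent of that pool creation would be roughly the following (the pool name ceph-kvm2 is just a placeholder for whatever the new pool ends up being called):
Code:
# create the new pool with 512 placement groups
ceph osd pool create ceph-kvm2 512 512
# 3 replicas, pool stays writable with 1 copy
ceph osd pool set ceph-kvm2 size 3
ceph osd pool set ceph-kvm2 min_size 1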
Then I figured I would use "Move disk" to move the disks from the original pool to the new one.
The move did not work; the message was:
Code:
create full clone of drive scsi0 (ceph-kvm:vm-118-disk-1)
TASK ERROR: storage migration failed: rbd error: rbd: couldn't connect to the cluster!
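For debugging, these are the checks I can run from the node; I am assuming the new storage is called ceph-kvm2 and that PVE looks for a keyring at /etc/pve/priv/ceph/<storage-id>.keyring (not completely sure about that path):
Code:
# see which storage keyrings exist (path is my assumption)
ls -l /etc/pve/priv/ceph/
# try to reach the new pool directly with the admin keyring
rbd -p ceph-kvm2 ls --id admin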
Is that supposed to work?
PS: If there is a bug to report, let me know and I will do so.