Recent content by hawk128

  1. ceph balancer mode upmap-read

     And what about the Ceph version? I am on Reef now, but ceph features says Proxmox uses Luminous as the client. So I cannot try upmap-read...
  2. ceph balancer mode upmap-read

     No, I meant the usage field in the OSD list. Anyway, OK. Thanks for your help. You just confirmed what I thought: my clusters are too small...
  3. ceph balancer mode upmap-read

     Hmm. The issue is that sometimes I need to push extra data onto Ceph temporarily, and some OSDs reach 90-95% capacity while others sit at 65-70%. Mostly this is due to the PG count on them. So I am trying to understand why the docs say +-1 while I see a 10+% difference in PG counts per OSD.
  4. ceph balancer mode upmap-read

     Hi Alexskysilk, I understand about the lab one, but I expected better PG allocation. For example, here is one of the prod nodes. You can see that 4 OSDs are the same size, yet one has 46 PGs and another 51, which looks like a 10% misallocation. In practice they should each have something like 49 PGs...
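A quick way to quantify the kind of per-OSD imbalance described above is to compare the PGS column across OSDs and ask the balancer for its own score. This is a generic sketch, not commands taken from the thread:

```shell
# Show usage and PG count per OSD, grouped by host; for OSDs of equal
# size and weight, the PGS column should be nearly uniform.
ceph osd df tree

# Check whether the balancer is active and has pending optimizations.
ceph balancer status

# Score the current distribution (lower is better, 0 is perfect).
ceph balancer eval
```

Comparing the eval score before and after a balancer mode change is a simple way to see whether the change actually helped.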
  5. ceph balancer mode upmap-read

     Hi Aaron, do you think it is worth moving .mgr to the main rule? I do not use autoscale; that is fine for me, I set the PG number manually. About size and min_size: yes, I know the risks, and in our case it is reasonable. In the worst case I have 4 independent levels of backups, so I can restore the whole...
  6. ceph balancer mode upmap-read

     Hi Aaron, I have 2 clusters. This one is the lab, and this one is prod: both have autoscale off, both have different numbers and sizes of OSDs, and both have a more or less equal total size per host.
  7. ceph balancer mode upmap-read

     I have two clusters: one prod, bigger, with 30 OSDs, and a test one with only 7. Both have balance issues. Here it is for the test one: ceph version ceph version 18.2.2 (e9fe820e7fffd1b7cde143a9f77653b73fcec748) reef (stable) ceph features { "mon": [ { "features"...
  8. ceph balancer mode upmap-read

     Hi, I am not happy with the current Ceph balancer, as there is too big a difference in the number of PGs per OSD. I would like to try upmap-read, but all clients must be Reef, and they are Luminous. Why does Proxmox use Luminous clients with Reef Ceph? Can I change it to Reef and activate upmap-read...
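For context, switching a Reef cluster to the upmap-read balancer roughly follows the steps below. A hedged sketch: set-require-min-compat-client refuses if any connected client still reports an older release, which is exactly the Luminous-client problem described in this thread.

```shell
# See which feature sets (and therefore client releases) are in use.
ceph features

# Current minimum client release the cluster enforces.
ceph osd get-require-min-compat-client

# Raise the floor to Reef; this fails while older clients are connected.
ceph osd set-require-min-compat-client reef

# Only after the above succeeds can the read-aware balancer be enabled.
ceph balancer mode upmap-read
ceph balancer status
```

If the floor cannot be raised because of the client release Proxmox reports, the balancer mode change is blocked regardless of the cluster's own version.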
  9. ERROR: VM 100 qmp command 'guest-fsfreeze-thaw' failed - got timeout

     Hi, using IDE for Windows is a bad idea. Change to SCSI, or at least SATA. That should help....
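The IDE-to-SCSI change suggested above can be done from the Proxmox CLI roughly as follows. This is a hypothetical sketch for VM 100: the volume name local-lvm:vm-100-disk-0 is an assumption, so check your own qm config output first, and make sure the Windows guest already has the virtio SCSI driver installed before switching.

```shell
# Note which volume is attached as ide0.
qm config 100

# Detach the IDE disk (the volume itself is kept as an unused disk).
qm set 100 --delete ide0

# Reattach the same volume on the SCSI bus with the virtio-scsi controller.
# "local-lvm:vm-100-disk-0" is an assumed volume name.
qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-100-disk-0

# Point the boot order at the moved disk again.
qm set 100 --boot order=scsi0
</imports>
```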
  10. ERROR: VM 100 qmp command 'guest-fsfreeze-thaw' failed - got timeout

     Hm. I do not use IO thread, but my error is slightly different.
  11. ERROR: VM 100 qmp command 'guest-fsfreeze-thaw' failed - got timeout

     Yes, I am on Virtual Environment 6.1-8. qm unlock 100 helps when it is locked. In this case I cannot connect or see anything via qm or the console. It looks like a bug in qemu-kvm...
  12. ERROR: VM 100 qmp command 'guest-fsfreeze-thaw' failed - got timeout

     This happens often enough, not only on clusters but on single hosts too. It is annoying to restart some Windows VMs every morning after the night backups... Any ideas?