Search results

  1. ceph balancer mode upmap-read

    And what about the Ceph version? I am on reef now, but ceph features says Proxmox uses luminous as the client. So, I cannot try upmap-read...
  2. ceph balancer mode upmap-read

    No, I was asking about the usage field in the OSD list. Anyway, ok. Thanks for your help. You just confirmed that my clusters are too small, which is what I thought...
  3. ceph balancer mode upmap-read

    Hmm. The issue is that sometimes I need to push more data onto Ceph temporarily, and some OSDs reach 90-95% capacity while others are at 65-70%. Mostly it is due to the PG count on them. So I am trying to understand why the docs say +-1, while I see a 10+% difference in PG counts per OSD.
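    The per-OSD spread described above can be inspected with standard Ceph commands; the PGS column shows the PG count per OSD:

        # List OSDs with utilization; compare the PGS column across
        # OSDs of the same size to quantify the imbalance.
        ceph osd df tree
        # The balancer's own view of the current state:
        ceph balancer status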
  4. ceph balancer mode upmap-read

    Hi Alexskysilk, I understand about the lab one, but I expected better PG allocation. For example, here is one of the prod nodes. You can see that 4 OSDs are the same size, but one has 46 PGs and another 51, which looks like a 10% misallocation. In practice, it should be something like 49 PGs on each...
  5. ceph balancer mode upmap-read

    Hi Aaron, do you think it is worth moving .mgr to the main rule? I do not use autoscale; that works for me. I set the PG numbers manually. About size and min_size - yes, I know the risks, and in our case it is reasonable. In the worst case I have 4 independent levels of backups. So, I can restore the whole...
  6. ceph balancer mode upmap-read

    Hi Aaron, I have 2 clusters. This one is the lab. This is the prod one: Both have autoscale off. Both have different numbers of OSDs and OSD sizes. Both have a more or less equal total size per host.
  7. ceph balancer mode upmap-read

    I have two clusters. One is prod - bigger, with 30 OSDs. The other is a test one with only 7. Both have balance issues. Here is the test one: ceph version ceph version 18.2.2 (e9fe820e7fffd1b7cde143a9f77653b73fcec748) reef (stable) ceph features { "mon": [ { "features"...
  8. ceph balancer mode upmap-read

    Hi, I am not happy with the current Ceph balancer, as there is too big a difference in the number of PGs per OSD. I would like to try upmap-read, but all clients must be reef, and currently they are luminous. Why does Proxmox use luminous clients with reef Ceph? Can I change it to reef and activate upmap-read...
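    For reference, the generic sequence for enabling upmap-read on a reef cluster looks like the sketch below. This is standard Ceph procedure rather than a confirmed fix for the Proxmox client question above, and raising the minimum client compatibility is only safe once every connected client genuinely supports reef:

        # Check what feature level each connected client actually reports.
        ceph features
        # Refuses if any connected client still looks older than reef.
        ceph osd set-require-min-compat-client reef
        # Switch the balancer to the read-optimizing upmap mode and enable it.
        ceph balancer mode upmap-read
        ceph balancer on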
  9. ERROR: VM 100 qmp command 'guest-fsfreeze-thaw' failed - got timeout

    Hi, using IDE for Windows is a bad idea. Change to SCSI or at least SATA. Should help....
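    A minimal sketch of that change from the CLI, assuming the VM is powered off, the VirtIO SCSI driver is already installed in the Windows guest, and placeholder storage/volume names:

        qm set 100 --scsihw virtio-scsi-pci          # use the VirtIO SCSI controller
        qm set 100 --delete ide0                     # detach; the volume shows up as unused
        qm set 100 --scsi0 local-lvm:vm-100-disk-0   # re-attach the same volume as scsi0
        qm set 100 --boot order=scsi0                # boot from the SCSI disk (newer boot syntax)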
  10. ERROR: VM 100 qmp command 'guest-fsfreeze-thaw' failed - got timeout

    Hm, I do not use IO threads, but my error is slightly different.
  11. ERROR: VM 100 qmp command 'guest-fsfreeze-thaw' failed - got timeout

    Yes, I am on Virtual Environment 6.1-8. qm unlock 100 helps when it is locked. In this case I cannot connect or see anything via qm or the console. It looks like a bug in qemu-kvm...
  12. ERROR: VM 100 qmp command 'guest-fsfreeze-thaw' failed - got timeout

    This happens often enough, not only on clusters but also on single hosts. It is annoying to restart some Windows VMs every morning after the nightly backups... Any ideas?
  13. ERROR: VM 100 qmp command 'guest-fsfreeze-thaw' failed - got timeout

    Hi all, I upgraded some clusters from the test repository a week ago. Since then, various Windows guests regularly get stuck. INFO: Starting Backup of VM 100 (qemu) INFO: Backup started at 2020-04-05 04:00:02 INFO: status = running INFO: VM Name: sexp-win10 INFO: include disk 'scsi0'...
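    On Proxmox VE releases newer than the 6.1 installs in this thread, the fsfreeze step can be skipped per VM, which is sometimes used as a workaround when guest-fsfreeze-thaw hangs; note that the backup then loses filesystem-level consistency:

        # Keep the guest agent enabled but skip fsfreeze/thaw during backups
        # (available on newer qemu-server versions).
        qm set 100 --agent enabled=1,freeze-fs-on-backup=0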
  14. Spice from Ubuntu with high latency.

    Hi all, I have some Proxmox clusters and have found issues connecting via SPICE to a few of them. After investigating, it looks like the Windows SPICE client works well, while the Ubuntu SPICE client only works when cluster latency is below 200 ms. It is 600-800 ms to a few clusters from me. Is it...
  15. Proxmox VE 5.3 released!

    Hi, it would be nice to add a checkbox for enabling STP on bridge interfaces to the GUI network section. I use it often enough via the CLI.
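    The CLI route mentioned here is an edit to /etc/network/interfaces; a minimal sketch with placeholder addresses and interface names:

        auto vmbr0
        iface vmbr0 inet static
                address 192.0.2.10/24
                gateway 192.0.2.1
                bridge-ports eno1
                bridge-stp on      # enable spanning tree on the bridge
                bridge-fd 15       # forward delay while STP converges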
  16. Proxmox VE 5.3 released!

    One more issue: I am going to use CephFS storage for backups, but there is no Maxfiles option in the GUI. I think it should be there...
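    Until it shows up in the GUI, the option can be set from the CLI (the storage name is a placeholder; on modern releases maxfiles has been superseded by prune-backups):

        # Keep at most 3 backups per guest on this storage.
        pvesm set cephfs-backup --maxfiles 3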
  17. Proxmox VE 5.3 released!

    Found a small issue with CephFS. I added [mon] mon_allow_pool_delete = true into /etc/pve/ceph.conf earlier. Now I tried to add CephFS storage as a test. It created it and mounted it into the Debian FS, but it could not be mounted into Proxmox (? in the GUI status for this storage). The solution is...
