Search results

  1. Disable fs-freeze on snapshot backups

    FYI I've encountered the problem on PVE installations with both Ceph and local disks, so I don't think it's a Ceph-specific issue. Maybe Ceph just makes it easier to trigger.
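
    For reference, recent Proxmox VE releases expose a guest-agent sub-option that skips fs-freeze on backup without disabling the agent entirely. A minimal sketch, assuming a current qemu-server (check "qm set --help" on your version) and using VM ID 100 as a placeholder:

      # keep the guest agent enabled, but skip the fs-freeze/thaw cycle on backup
      qm set 100 --agent enabled=1,freeze-fs-on-backup=0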
  2. [SOLVED] Trying to understand ceph usable space

    You are correct; I was looking at the latest documentation, which didn't mention "ceph balancer on". I thought it was enabled by manually creating a plan, but obviously I was wrong. So I ran "ceph balancer on" and it immediately started moving around a few PGs, and the usable space grew to...
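
    For context, the balancer is a Ceph manager module driven by a few standard commands; a minimal sketch of the sequence the poster describes:

      ceph balancer mode upmap   # upmap is the usual mode on recent releases
      ceph balancer on           # enable automatic balancing
      ceph balancer status       # shows the mode and whether plans are executing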
  3. [SOLVED] Trying to understand ceph usable space

    So, we brought pve02 back online, and changed the pg_num to 2048. After the rebalancing completed, we gained tons of usable storage! From 28.1TiB it went to 44.24TiB, which is a huge gain of 16.14TiB! But still, there are almost 10TiB that are not accounted for. ceph reports the following raw...
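
    The pg_num change described above is a per-pool setting; a minimal sketch, with "vm-pool" standing in for the actual pool name:

      # raise the PG count; Ceph splits the PGs and rebalances gradually
      ceph osd pool set vm-pool pg_num 2048
      # confirm the new value
      ceph osd pool get vm-pool pg_num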
  4. [SOLVED] Trying to understand ceph usable space

    This is Ceph Octopus (15.2.15). We haven't upgraded to a newer version yet, but it's on our todo list. Ok, we will increase the pg_num first. Can we try that now (while pve02 is down), or should we wait until it's fixed first? I also enabled the balancer, but from what I read in the documentation...
  5. [SOLVED] Trying to understand ceph usable space

    Would rearranging the available disks per node make any difference, either for easier calculations or for actual usable space?
  6. [SOLVED] Trying to understand ceph usable space

    Thanks for the hints. I've set the autoscaler mode to warn at the moment and the ratio to 1.
    root@pve01 ~ # pveceph pool ls --noborder
    Name Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG Autoscale Target Ratio Crush Rule Name...
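
    The two settings mentioned above correspond to two pool properties; a minimal sketch, again with "vm-pool" as a placeholder pool name:

      # only warn instead of changing PG counts automatically
      ceph osd pool set vm-pool pg_autoscale_mode warn
      # hint that this pool is expected to consume all available capacity
      ceph osd pool set vm-pool target_size_ratio 1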
  7. [SOLVED] Trying to understand ceph usable space

    root@pve01 ~ # ceph osd df tree
    ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
    -1 162.40396 - 131 TiB 53 TiB 53 TiB 1.0 GiB 149 GiB 77 TiB 40.71 1.00 - root default
    -3...
  8. [SOLVED] Trying to understand ceph usable space

    Hello, We run a PVE cluster of 5 nodes with Ceph on each node. Each node has a number of OSDs, each backed by SSDs of various sizes. A few months ago the OSDs / SSD drives per node were as follows:
    PVE1: 4x 3.49TiB (3.84TB), 5x 1.75TiB (1.92TB), 3x 745GiB (800GB)
    PVE2: 4x 3.49TiB (3.84TB)...
  9. Disable fs-freeze on snapshot backups

    For me the solution has been to disable the qemu-agent. This allows the VMs to be backed up with PBS without them getting "blocked".
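
    Assuming the poster means the agent option on the VM config (rather than the service inside the guest), a minimal sketch with VM ID 100 as a placeholder:

      # disable the QEMU guest agent for this VM; backups then skip fs-freeze/thaw
      qm set 100 --agent 0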
  10. VM crash during backup

    I haven't had a crash yet since the update.
  11. VM crash during backup

    Great! I've installed pve-qemu-kvm=8.0.2-6 from the pvetest repo. I'll let you know if the issue has been resolved. Thanks!
  12. VM crash during backup

    I'm in the same boat as matoa. I have a single VM that randomly crashes during backup. I hope the fix gets released soon. Thanks!
  13. Strange slowness and micro interruptions (solved but want to share)

    Has anyone noticed any difference switching from "Static High Performance" to "OS control mode"? I am experiencing the same issue as abzsol when deleting snapshots or images on Ceph. The whole cluster slows down.
  14. move osd to another node

    I would prefer an official guide from Proxmox. All the posts in the forum with "success stories" about moving OSDs are kind of anecdotal. Half the people say it didn't work for them (me included), and half say "Great, it worked" without so much as a detailed description of how exactly they made it work...
  15. move osd to another node

    Can someone post a 100% working workflow for this? I tried today to move an OSD from one host to another, and it simply wouldn't get recognized by Ceph. "ceph-volume lvm activate --all" was supposedly successful, but the osd tree would not move the OSD from the old node to the new one.
    # ceph-volume lvm...
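
    For what it's worth, the workflow usually described in these threads combines ceph-volume activation with a CRUSH-placement check; a sketch under those assumptions, not a verified recipe (OSD ID 12, weight 3.49, and hostname pve02 are placeholders):

      # on the destination host, after physically moving the disk
      ceph-volume lvm activate --all
      systemctl start ceph-osd@12
      # check where the OSD now sits in the CRUSH tree
      ceph osd tree
      # if it is still listed under the old host, reassign it explicitly
      ceph osd crush set osd.12 3.49 host=pve02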
  16. Datacenter Summary - Storage Usage calculation algorithm

    No, I mean the total space increases. See my quote in my first post. These copy/pastes are from the dashboard over a period of 24 hours. At one point, for example, it was:
    Storage 8.02 TiB of 12.10 TiB
    And then at another point later on:
    Storage 8.05 TiB of 12.13 TiB
    I get that the used...
  17. Datacenter Summary - Storage Usage calculation algorithm

    Thank you, but I've already done that. Still, this doesn't answer my question: not which storages are included, but how those included storages are calculated.
  18. Datacenter Summary - Storage Usage calculation algorithm

    Hello, How does Proxmox calculate the storage usage in the Datacenter Summary section? I am running 3 nodes, with 4x 3.8TB SSDs per node used in a Ceph cluster (3 replicas, standard/default Ceph installation). I've configured the dashboard to only show the storage for Ceph for a single node...
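
    One way to inspect the raw numbers behind the Datacenter Summary is the cluster resources API, which reports used (disk) and total (maxdisk) bytes per storage; how the GUI aggregates these is exactly the open question in this thread. A minimal sketch:

      # list every storage entry with its disk/maxdisk values
      pvesh get /cluster/resources --type storage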
  19. SSD Wear Level algorithm

    I will take a look at the source code to get a better understanding of how exactly it works. Thank you for the information.
  20. SSD Wear Level algorithm

    Also, while we are at it, when does Proxmox consider the SMART status "Not passed"? I've got drives with reallocated sectors, offline-uncorrectable sectors, and SMART self-tests with LBA errors, and Proxmox still shows the drive as SMART: PASSED.
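
    Proxmox VE reads disk health via smartmontools, so the status shown in the GUI can be cross-checked on the command line; a minimal sketch, with /dev/sda as a placeholder device:

      # overall health self-assessment (the source of the PASSED / FAILED verdict)
      smartctl -H /dev/sda
      # full attribute table, including reallocated and offline-uncorrectable counts
      smartctl -A /dev/sda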
