Recent content by Gert

  1. Understanding Ceph free space

    I see, you are right. After adding the extra 4 OSDs I must increase the PG count from 256 to 512.
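The 256-to-512 step above matches the common PG sizing rule of thumb. A sketch of the arithmetic (the ~100-PGs-per-OSD target and 3-way replication are assumptions, not stated in the thread):

```python
# Rule-of-thumb placement-group sizing: roughly 100 PGs per OSD, divided by
# the replica count, rounded up to the next power of two. Illustrative
# heuristic values, not taken from this cluster's actual configuration.
def suggested_pg_count(num_osds, pgs_per_osd=100, replicas=3):
    target = num_osds * pgs_per_osd / replicas
    pg_num = 1
    while pg_num < target:
        pg_num *= 2          # pg_num should be a power of two
    return pg_num

print(suggested_pg_count(12))  # 12 SSD OSDs, 3 replicas -> 512
```

With 12 OSDs the target is 400, which rounds up to 512 — consistent with the jump described in the post.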
  2. Understanding Ceph free space

    I believe my PG count is fine. I ran the balancer in upmap mode and the capacity increased from 1.59 TB to 1.8 TB. It is still busy but almost done, and it looks much better now: root@node1:~# ceph osd df | grep ssd 0 ssd 0.45409 1.00000 465 GiB 229 GiB 228 GiB 4.1 MiB 1.3 GiB 236 GiB 49.23...
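The upmap balancer run described above follows the stock Ceph CLI sequence; a sketch (standard commands run on a monitor node, not verified against this particular cluster):

```shell
# Enable and watch the upmap balancer (stock Ceph commands).
ceph osd set-require-min-compat-client luminous  # upmap needs luminous+ clients
ceph balancer mode upmap                         # balance via pg-upmap-items
ceph balancer on                                 # enable background balancing
ceph balancer status                             # watch remapping progress
ceph osd df | grep ssd                           # per-OSD fill level, as in the post
```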
  3. Understanding Ceph free space

    I see, thank you for the clarification. root@node1:~# ceph osd df | grep ssd 0 ssd 0.45409 1.00000 465 GiB 243 GiB 242 GiB 4.1 MiB 1.3 GiB 222 GiB 52.22 1.44 68 up 1 ssd 0.45409 1.00000 465 GiB 191 GiB 189 GiB 3.2 MiB 1.1 GiB 274 GiB 40.99 1.13 53 up 2 ssd 0.45409...
  4. Understanding Ceph free space

    OK, so the GUI now agrees with the usage of POOLS in ceph df, "66.50% (1.05 TiB of 1.59 TiB)". However, I still don't understand why the total capacity of the ceph-ssd pool is only 1.58 TB. Is it not supposed to be 1.8 TB, the same as in RAW STORAGE? (5.4 / 3 = 1.8) I now have 12x 500GB (465GB...
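The gap between RAW STORAGE / 3 and the pool's reported capacity usually comes from OSD imbalance: Ceph projects a pool's MAX AVAIL from its fullest OSD, because writes stop once any single OSD reaches its full ratio. A simplified sketch with illustrative numbers (not the exact MAX AVAIL formula, and not this cluster's real data):

```python
# Simplified model: the headroom fraction of the single most-full OSD caps
# the whole pool, so imbalance shrinks the reported pool capacity even
# though raw capacity is unchanged. All numbers below are made up.
def pool_max_avail_gib(osd_size_gib, osd_used_gib, replicas=3, full_ratio=0.95):
    limiting = min((full_ratio * size - used) / size
                   for size, used in zip(osd_size_gib, osd_used_gib))
    return limiting * sum(osd_size_gib) / replicas

sizes = [465.0] * 12                  # 12x ~465 GiB SSD OSDs
balanced = [232.0] * 12               # even data distribution
imbalanced = [232.0] * 11 + [330.0]   # one OSD noticeably fuller

print(pool_max_avail_gib(sizes, balanced))    # larger pool MAX AVAIL
print(pool_max_avail_gib(sizes, imbalanced))  # smaller: one full-ish OSD caps it
```

This is why running the balancer (as in the later posts) raises the pool's reported capacity without adding any disks.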
  5. Reclaiming free space in VMs and Containers with CEPH RBD images

    Ah, I see. That makes sense. So it should help if I defrag the file system first?
  6. Reclaiming free space in VMs and Containers with CEPH RBD images

    You mean 99 + 67 != 200? After filling the image with zeros, the image size reported back from RBD was maxed out at 200 GB; then while running fstrim it came down in real time to 139. It is now days later and it is still at 139. "pveversion -v": root@node1:~# pveversion -v proxmox-ve: 6.0-2 (running...
  7. Reclaiming free space in VMs and Containers with CEPH RBD images

    I used to be able to reclaim free space of VMs when I was still using ZFS, but with RBD images on Ceph it does not seem to work properly: I manage to reclaim some of the free space, but not all. I have the same issue with VMs and Containers. I have discard enabled and I follow the following...
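The usual reclaim procedure for RBD-backed guests can be sketched as follows (assuming discard=on on the virtual disk and a controller that passes discards through, such as virtio-scsi; the pool and image names are placeholders, not taken from the post):

```shell
# Inside the guest: tell the block layer which extents are free.
fstrim -av                       # trim all mounted filesystems, verbose output

# On the Proxmox host: check what the image actually occupies afterwards.
# "ceph-ssd" and "vm-100-disk-0" are placeholder pool/image names.
rbd du ceph-ssd/vm-100-disk-0
```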
  8. Understanding Ceph free space

    Thank you. Looking forward to the updated package.
  9. Understanding Ceph free space

    I have a 3 node cluster using Ceph with the following OSD configuration: node1: 3x 500GB SSD (465GB Usable) 2x 4TB HDD (3.64TB Usable) node2: 3x 500GB SSD (465GB Usable) 2x 4TB HDD (3.64TB Usable) node3: 3x 500GB SSD (465GB Usable) 2x 3TB HDD (2.73TB Usable) I have created 2 pools...
  10. pveproxy become blocked state and cannot be killed

    This happens to 2 of my 3 nodes in my cluster every couple of days, always the same 2 nodes and always at the same time. Proxmox sends this email: /etc/cron.daily/logrotate: Job for pveproxy.service failed. See 'systemctl status pveproxy.service' and 'journalctl -xn' for details. Job for...
  11. Proxmox 4.0 VE fresh install: can't shutdown VMs with host

    This is not the case. I mostly use standard CentOS installations and ALL of them have always shut down perfectly in all previous versions of Proxmox, except 4.0. And even if the problem were with the guest, there should still be a timeout from the host side. Yet in 4.0 they all just stop instantly without...
  12. Proxmox 4.0 VE fresh install: can't shutdown VMs with host

    I just want to confirm that I am having the same problem on all my Proxmox 4.0 boxes, all clean installs. Yes, the older 3.4 installations work as expected. I am sure it is a software bug in 4.0.
  13. Recovering VM with snapshots

    I think I have figured it out. First I run: qemu-img snapshot -l vm-100-disk-3.qcow2 and then I run: qemu-img snapshot -d 1 vm-100-disk-3.qcow2 qemu-img snapshot -d 2 vm-100-disk-3.qcow2 qemu-img snapshot -d 3 vm-100-disk-3.qcow2 qemu-img snapshot -d...
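The repeated deletions in the last post can be scripted, since `qemu-img snapshot -d` accepts a snapshot name or ID (the filename matches the post; the IDs 1-4 are taken from the commands shown there):

```shell
# List snapshots first to see the ID and NAME columns, then delete each one.
qemu-img snapshot -l vm-100-disk-3.qcow2
for id in 1 2 3 4; do
    qemu-img snapshot -d "$id" vm-100-disk-3.qcow2
done
```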

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
