I believe my PG count is fine. I ran the balancer in upmap mode and the capacity increased from 1.59 TB to 1.8 TB.
It is still busy but almost done, and it looks much better now:
root@node1:~# ceph osd df | grep ssd
0 ssd 0.45409 1.00000 465 GiB 229 GiB 228 GiB 4.1 MiB 1.3 GiB 236 GiB 49.23...
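For anyone wanting to try the same, enabling the upmap balancer generally looks like this (a minimal sketch; the set-require-min-compat-client step is only needed if it is still set to something older than luminous):

ceph osd set-require-min-compat-client luminous   # upmap requires all clients to be luminous or newer
ceph balancer mode upmap                          # use pg-upmap-items instead of crush-compat reweighting
ceph balancer on                                  # let the balancer run continuously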
OK, so the GUI now agrees with the POOLS usage in ceph df, "66.50% (1.05 TiB of 1.59 TiB)". However, I still don't understand why the total capacity of the ceph-ssd pool is only 1.58 TB. Isn't it supposed to be 1.8 TB, the same as in RAW STORAGE? (5.4 / 3 = 1.8)
I now have 12x 500GB (465GB...
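As a rough sanity check of my own numbers (assuming 3x replication across the 12 SSDs):

12 x 465 GiB             = 5,580 GiB  (~5.45 TiB RAW STORAGE)
5,580 GiB / 3 replicas   = 1,860 GiB  (~1.8 TiB usable, best case)

As far as I understand, the MAX AVAIL that ceph df reports per pool is projected from the fullest OSD and the full ratio rather than being a simple RAW/3 division, so an imbalanced cluster will show somewhat less than that best-case figure.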
You mean 99 + 67 != 200? After filling the image with zeros, the image size reported back by RBD was maxed out at 200 GB. While running fstrim it came down in real time to 139 GB; it is now days later and it is still at 139 GB.
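For comparison, the numbers above can be checked with something like this (a sketch; the pool name ceph-ssd and image name vm-100-disk-0 are only placeholders):

rbd du ceph-ssd/vm-100-disk-0    # allocated vs provisioned size as seen by Ceph
fstrim -v /                      # inside the guest: discard unused blocks on the root filesystem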
"pveversion -v"
root@node1:~# pveversion -v
proxmox-ve: 6.0-2 (running...
I used to be able to reclaim the free space of VMs when I was still using ZFS, but with RBD images on Ceph it does not seem to work properly: I manage to reclaim some of the free space, but not all. I have the same issue with VMs and containers. I have discard enabled and follow the following...
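For context, a minimal example of what "discard enabled" typically looks like on a Proxmox VM (VM ID 100 and the ceph-ssd storage name are placeholders, not my actual config):

qm set 100 --scsihw virtio-scsi-pci                     # virtio-scsi is the usual choice for discard support
qm set 100 --scsi0 ceph-ssd:vm-100-disk-0,discard=on    # enable discard on the disk
# then, inside the guest:
fstrim -av                                              # trim all mounted filesystems that support it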
This happens on 2 of my 3 nodes in the cluster every couple of days, always the same 2 nodes and always at the same time.
Proxmox sends this email:
/etc/cron.daily/logrotate:
Job for pveproxy.service failed. See 'systemctl status pveproxy.service' and 'journalctl -xn' for details.
Job for...
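For anyone else seeing this, the obvious first steps are the ones the email itself points at (standard systemd tooling, nothing Proxmox-specific):

systemctl status pveproxy.service      # current state and the last few log lines
journalctl -u pveproxy.service -b      # full log for the service since boot
systemctl restart pveproxy.service     # restart it by hand if it is stuck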
This is not the case. I mostly use standard CentOS installations and ALL of them have always shut down perfectly in all previous versions of Proxmox, except 4.0. And even if the problem were with the guest, there should still be a timeout from the host side. Yet in 4.0 they all just stop instantly without...
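By "timeout from the host side" I mean the behaviour you would normally get from something like (VM ID 100 is just an example):

qm shutdown 100 --timeout 60    # request a clean guest shutdown, give up after 60 seconds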
I just want to confirm that I am having the same problem on all my Proxmox 4.0 boxes, all clean installs. Yes, the older 3.4 installations work as expected. I am sure it is a software bug in 4.0.
I think I have figured it out. First I ran:
qemu-img snapshot -l vm-100-disk-3.qcow2
and then I ran:
qemu-img snapshot -d 1 vm-100-disk-3.qcow2
qemu-img snapshot -d 2 vm-100-disk-3.qcow2
qemu-img snapshot -d 3 vm-100-disk-3.qcow2
qemu-img snapshot -d 4 vm-100-disk-3.qcow2
qemu-img snapshot -d...
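If there are many snapshots, a rough loop like this should do the same thing as the one-by-one commands above (assuming the default two header lines of the -l output, with the snapshot tag in the second column):

for snap in $(qemu-img snapshot -l vm-100-disk-3.qcow2 | awk 'NR>2 {print $2}'); do
    qemu-img snapshot -d "$snap" vm-100-disk-3.qcow2
done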