No, I was asking about the usage field per OSD in the Ceph OSD list.
Anyway, OK. Thanks for your help.
You just confirmed what I already thought: my clusters are too small...
Hmm.
The issue is that sometimes I need to push more data onto Ceph temporarily, and some OSDs reach 90-95% capacity while others sit at 65-70%.
Mostly this is due to the number of PGs on them.
So I am trying to understand why the docs say ±1, while I see a 10+% difference in PG counts per OSD.
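For anyone checking the same thing, this is how I look at the spread (standard Ceph commands, nothing specific to my cluster; the PGS column is what I compare):
# per-OSD utilisation and PG count (PGS column)
ceph osd df tree
# current balancer mode and whether it is active
ceph balancer status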
Hi Alexskysilk,
I understand about the lab one.
But I expected better PG allocation.
For example, here is one of the prod nodes. You can see that 4 OSDs are the same size, yet one has 46 PGs and another 51, which looks like a 10% misallocation.
In practice they should have something like 49 PGs each...
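If I understand the upmap balancer correctly, it only aims for a deviation of a few PGs per OSD by default, so a tighter target might help; a sketch with the standard mgr option (the value 1 is just an example):
# default deviation is 5 PGs; ask the balancer to aim tighter
ceph config set mgr mgr/balancer/upmap_max_deviation 1
# see how far the current distribution is from the target
ceph balancer eval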
Hi Aaron,
Do you think it is worth moving .mgr to the main rule?
I do not use autoscale; that is fine for me. I set the PG numbers manually.
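Roughly what I mean, as a sketch (the rule name "main_rule" and the pool name "mypool" are placeholders, not my real names):
# move the .mgr pool to the main crush rule
ceph osd pool set .mgr crush_rule main_rule
# keep autoscale off and set the PG number by hand
ceph osd pool set mypool pg_autoscale_mode off
ceph osd pool set mypool pg_num 128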
About size and min_size: yes, I know the risks, and in our case it is reasonable.
In the worst case I have 4 independent levels of backups, so I can restore the whole...
Hi Aaron,
I have 2 clusters. This one is the lab.
This one is prod:
Both have autoscale off.
Both have different numbers of OSDs and different OSD sizes, but more or less equal total size per host.
I have two clusters.
One is prod: bigger, with 30 OSDs.
The other is a test one with only 7.
Both have balance issues.
Here are the outputs for the test one:
ceph version
ceph version 18.2.2 (e9fe820e7fffd1b7cde143a9f77653b73fcec748) reef (stable)
ceph features
{
"mon": [
{
"features"...
Hi,
I am not happy with the current Ceph balancer, as there is too big a difference in the number of PGs per OSD.
I would like to try upmap-read, but all clients must be reef, and mine report as luminous.
Why does Proxmox use luminous clients for reef Ceph? Can I change it to reef and activate upmap-read...
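In case it helps, this is the sequence I have in mind from the reef docs; treat it as a sketch, I have not run it on prod yet:
# only once every client is actually reef-capable
ceph osd set-require-min-compat-client reef
# switch the balancer to the read-optimised upmap mode
ceph balancer mode upmap-read
ceph balancer on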
Yes. I am on Virtual Environment 6.1-8.
qm unlock 100 helps when it is locked.
In this case I cannot connect to it or see anything via qm or the console.
It looks like a bug in qemu-kvm...
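For completeness, these are the standard qm commands I try when a guest is in that state (as said above, they do not show anything useful here; VMID 100 is just the example from above):
# the lock, if any, shows up in the config
qm config 100
# state as Proxmox sees it
qm status 100 --verbose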
This happens often enough.
Not only on the cluster, but also on single hosts.
It is annoying to restart some Windows VMs every morning after the nightly backups...
Any ideas?
Hi all,
I upgraded some clusters a week ago from the test repository.
Since then, various Windows guests regularly get stuck.
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2020-04-05 04:00:02
INFO: status = running
INFO: VM Name: sexp-win10
INFO: include disk 'scsi0'...
Hi all,
I have some Proxmox clusters and have found issues connecting via SPICE to a few of them.
After investigation, it looks like the Windows SPICE client works well, while the Ubuntu SPICE client only works when latency to the cluster is under 200 ms.
From my location it is 600-800 ms to a few of the clusters. Is it...
Found a small issue with cephFS.
I added
[mon]
mon_allow_pool_delete = true
into /etc/pve/ceph.conf earlier.
Now I tried to add cephFS storage as a test. It gets created and I could mount it on a Debian host, but Proxmox could not mount it (a ? in the GUI status for this storage).
The solution is...
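As a side note, the same option can also be set at runtime via the monitor config database instead of editing /etc/pve/ceph.conf (a sketch; standard Ceph commands):
ceph config set mon mon_allow_pool_delete true
# verify what the mons actually use
ceph config get mon mon_allow_pool_delete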