Check the OSD usage ratios. Is one OSD holding more data than the others? You can see this on the Ceph > OSD page, in the Used (%) column.
If one of the OSDs is holding a higher percentage than the others, you can manually lower its reweight value via the command line to help force a...
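As a sketch, the reweight can be lowered with `ceph osd reweight` (the OSD id and weight value here are examples, not from the original post):

```shell
# Check per-OSD utilisation first
ceph osd df

# Lower the override weight of osd.3 (example id) from 1.0 to 0.9,
# which nudges Ceph to move some placement groups off that OSD
ceph osd reweight 3 0.9
```

Note this sets the temporary override weight, not the CRUSH weight; it is the usual knob for evening out a single overfull OSD.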
Old thread, but great advice. Thank you @aaron. This was my issue also. Could this be added to the Proxmox Wiki? I searched the Ceph documentation and didn't see mention of this issue.
I would only recommend having multiple pools if the replication level is different or if the drive type is different, e.g. HDD vs. SSD. If you follow this path, you must set up separate CRUSH rules using device classes for each drive type. Again, this is only needed if you have different drive types. If all...
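For reference, a minimal sketch of per-drive-type rules using CRUSH device classes (the rule names are examples; the `hdd`/`ssd` classes are detected automatically on modern Ceph):

```shell
# One replicated rule per device class, failure domain = host
ceph osd crush rule create-replicated replicated-hdd default host hdd
ceph osd crush rule create-replicated replicated-ssd default host ssd
```

Pools assigned to `replicated-ssd` will then only place data on SSD-class OSDs, and likewise for HDD.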
Look into erasure coding: https://docs.ceph.com/docs/master/rados/operations/erasure-code/. I use it on my CephFS implementation on a 3-node cluster with 4 OSDs per node. It works well. I would not use erasure coding for VM storage.
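As a minimal sketch, an erasure-coded pool is created from a profile (the profile name, k/m values, and PG count below are examples sized for a 3-node cluster, not the poster's actual settings):

```shell
# Example profile: 2 data chunks + 1 coding chunk, one chunk per host.
# With crush-failure-domain=host, k+m must not exceed the number of hosts.
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host

# Create a pool using that profile (PG count of 128 is just an example)
ceph osd pool create ecpool 128 128 erasure ec-2-1
```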
I use Ceph BlueStore OSDs... if you do too, review this reference doc: http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/
TL;DR: set the maximum memory usage per OSD in ceph.conf. I have done this without any issues.
[global]
other settings...
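The exact option the post used is truncated above, but as a hedged sketch, the modern knob is `osd_memory_target` (older BlueStore releases used `bluestore_cache_size`); the 2 GiB value is only an example, and it can live under `[global]` or `[osd]`:

```ini
[osd]
# Cap each OSD daemon's memory usage (example value: 2 GiB)
osd_memory_target = 2147483648
```

Restart the OSD daemons after changing it for the setting to take effect.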
Good news... I read up on the Ceph docs and found the following:
A pool can then be changed to use the new rule with:
```
ceph osd pool set <pool-name> crush_rule <rule-name>
```
In other words: pools can have NEW CRUSH rules applied!
http://docs.ceph.com/docs/mimic/rados/operations/crush-map/
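Putting it together, a sketch of switching an existing pool onto an SSD-only rule (pool and rule names are hypothetical):

```shell
# Assign the new rule to the existing pool
ceph osd pool set vm-pool crush_rule replicated-ssd

# Verify the change took effect
ceph osd pool get vm-pool crush_rule
```

Ceph will then rebalance the pool's data onto OSDs matching the new rule in the background.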
Please clarify so I understand your problem correctly: is the 'general' pool storing data on the SSDs?
This advice depends on my understanding above being correct. Just create a new temporary pool and move everything off of the current problematic pool. Then delete the...
Ceph does use ALL OSDs for any pool whose CRUSH rule does not restrict the drive type. Your theory is likely valid.
Also, FYI, the Total column shows the amount of data currently stored, not the total capacity available to the pool.
I am still experiencing this error... I must manually restart networking.service after each reboot for the OVS network to come up. Does anyone know anything about this bug?
Thanks!
Based upon my knowledge, however limited: if using Ceph or GlusterFS, you can definitely set up HA as long as you have duplicates of the data, e.g. set the Ceph replication level (pool size) to 3. Also, I suspect you could run Ceph on a 3-node cluster with 1 SSD journal device (I would...
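As a sketch, the replication level is set per pool (the pool name is hypothetical; the values shown are the common 3-copy setup):

```shell
# Keep 3 copies of every object in the pool
ceph osd pool set vm-pool size 3

# Require at least 2 copies to be available before serving I/O
ceph osd pool set vm-pool min_size 2
```

With size 3 on a 3-node cluster, each node holds one copy, so HA can survive the loss of a single node.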
I would definitely recommend considering Ceph. However, I would want the customer to have a subscription due to the upcoming changes in Proxmox with the recent Hammer to Jewel upgrade. Also, pveceph install does not run correctly due to changes on the Ceph end. It is easy to...
I am attempting to install Ceph version Hammer via pveceph:

```
pveceph install -version hammer
```
However, this is failing on me with the following output:
```
root@node1:~# pveceph install -version hammer
download and import ceph repository keys
unable to download ceph release key: 500 Can't connect to...
```
As a follow-up question to the above: which interface do disk migrations occur on? From my observations, I believe it to be either the default vmbr0 or the corosync network (if segmented off of vmbr0)... The same question for NFS mounts: which network interface is responsible for...
Is it possible to select the network on which live migration occurs? Is it the default vmbr0? Or is it the same network as corosync, in which case I can change the network via this wiki guide: https://pve.proxmox.com/wiki/Separate_Cluster_Network. Or is it completely random?
I am unable to...
I would recommend you forgo the need for a 10GbE switch by directly linking the machines if your cluster is only 3 nodes. You would need 3 Intel X520-DA2s, which can be had on eBay for ~$100 per device. This would enable you to set up a static Ceph network without a switch and have the...