Hi, I'm adding OSDs to my Ceph cluster using the command below, however it says there are incompatible values, which doesn't match what is specified here: https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/ . Am I misunderstanding something?
I'm using Ceph version 18.2.4...
Nothing particularly good, it's a KINGSTON SNVS/250GCN. Isn't the supercapacitor just for power cuts? I have UPSes on my server, so that's not really a concern.
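For what it's worth, the usual way to judge whether a drive is suitable for WAL/DB duty is a single-threaded sync-write test, since that's the pattern the OSD generates: power-loss protection lets the drive acknowledge each flush from its cache, which is why it matters for latency and not just for power cuts. A rough sketch (the device path is a placeholder, and this writes directly to the disk, so only run it on an empty drive):

```
# Destructive test: writes raw 4k sync I/O straight to the device.
# Drives with PLP typically show thousands of IOPS here; consumer drives
# without PLP often manage only a few hundred, which caps Ceph write latency.
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting \
    --name=plp-sync-write-test
```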
I have a 16 x 1 TB Ceph cluster over 4 nodes and my write latency is kinda slow. I already have the DB and WAL on a separate SSD, and separate networks for the front end and back end. How do I get the write latency under 16 ms, other than going to a full-SSD cluster of course?
Ping results
Backend:
1...
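In case it helps narrow things down, a rough sketch of what I'd run to see where the latency is coming from (plain Ceph commands, nothing specific to this cluster; the pool name is a placeholder):

```
# Per-OSD commit/apply latency in ms; a few slow outliers usually point at one disk or host
ceph osd perf

# Overall cluster state; any recovery/backfill in flight will inflate client write latency
ceph -s

# Single-threaded 4k write latency against a test pool (leaves objects behind, clean up after)
rados bench -p scbench 30 write -b 4096 -t 1 --no-cleanup
rados -p scbench cleanup
```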
I've decided to just nuke my current backups and set up a new one. If anyone has any tips, post them for the next person. I'd rather lose my current backups than not be able to back up in the future.
Hi guys, I did an oopsie. I set up a new PBS server and forgot to configure GC and a retention policy (I have one on PVE), and now the disk is 100% full. How do I get the GC to run now that it's giving me a disk-full error?
TASK ERROR: update atime failed for chunk/file...
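In case someone lands here with the same problem, this is roughly the direction I'm trying, assuming the datastore name is a placeholder and that a little space can be freed first (GC has to update chunk atimes, so it can't run on a completely full filesystem):

```
# Find something small and non-chunk to delete so GC has working room,
# e.g. old task logs, if they live on the same filesystem as the datastore
du -sh /var/log/proxmox-backup/tasks

# Once there is a little headroom, kick off garbage collection from the CLI
# ("mydatastore" is a placeholder for the real datastore name)
proxmox-backup-manager garbage-collection start mydatastore
proxmox-backup-manager garbage-collection status mydatastore
```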
Same here. I have a home lab with 3 low-power systems, and I would love to support Proxmox, but 315 is wayyy too much (like, add at least 10 Ys since I live in SEA). It would be great to have a cheaper license with a limit on how many nodes we can use, like how Portainer does it.
edit: for context to...
I had a 3-node cluster where the command `rados bench -p scbench 10 write --no-cleanup` gave the following results.
After adding another node, however, the results seem to have tanked.
I can obviously see the cur MB/s had a sharp drop, but I'm not sure what the issue is and not even sure how...
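If it helps, a rough sketch of what I'd check before trusting the new numbers (generic commands, nothing cluster-specific):

```
# Adding a node triggers rebalancing; backfill/recovery competes with client I/O
# and will tank benchmark numbers until it finishes
ceph -s

# Check how data and PGs are now spread across the old and new OSDs
ceph osd df

# Remove the objects left behind by --no-cleanup, then re-run the benchmark
rados -p scbench cleanup
rados bench -p scbench 10 write --no-cleanup
```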
Hi, I just did a hard disk swap and now none of the OSDs on that node are able to start with the service
`systemctl start ceph-osd@0`
The output of `systemctl status ceph-osd@0` is
ceph-osd@0.service - Ceph object storage daemon osd.0
Loaded: loaded (/lib/systemd/system/ceph-osd@.service...
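For anyone else hitting this: the status output alone rarely says much, so this is roughly what I'd look at next, assuming the OSDs were deployed with ceph-volume (adjust the OSD id to match):

```
# Full error log for the failing OSD, not just the truncated status output
journalctl -u ceph-osd@0 -n 50 --no-pager

# Show which LVs ceph-volume can still see after the disk swap
ceph-volume lvm list

# Re-create the tmpfs mounts and systemd units for all detected OSDs
ceph-volume lvm activate --all
```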
My Ceph cluster has 3 x 3 TB and 3 x 1 TB drives with SSD WAL and DB. The write speeds are kinda meh on my VMs. From what I understand, the 3 TB drives will get 3x the write requests of the 1 TB drives. Is my understanding correct? And would it be better if I swapped my 3 TB drives for 1 TB drives, making it 2...
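That assumption about placement can be checked directly; a quick sketch of what I'd look at (plain Ceph commands, nothing specific to this setup):

```
# CRUSH weight is proportional to capacity by default, so a 3 TB OSD carries
# roughly 3x the PGs (and therefore roughly 3x the writes) of a 1 TB OSD;
# the WEIGHT and PGS columns here show whether that's actually happening
ceph osd df tree
```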
I run Proxmox with low-powered nodes in my home lab. Looking at the licences, it's by CPU; does that mean that if I buy the 4-CPU subscription I can run it on 4 hosts with 1 CPU each, or does it mean I can run it on 1 host with 4 CPUs (if that even exists)? And can I mix nodes, such as 1 2-cpu node...
I did an oopsie while upgrading my node and forgot to move my templates over from the old node. I already deleted the old node, so I can't just use the UI.
Trying to copy the file over gets me this:
root@pve2:~# cp /etc/pve/nodes/pve3/qemu-server/1000.conf...
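In case it saves someone else time: the guest definitions in /etc/pve are just text files, so assuming the old node's directory still shows up under /etc/pve/nodes, something like this should move the template's config onto the current node (node names and VMID are from my setup, adjust to match):

```
# Moving the config file into this node's directory makes the guest appear here
mv /etc/pve/nodes/pve3/qemu-server/1000.conf /etc/pve/nodes/pve2/qemu-server/

# The disks live in the configured storage, not in /etc/pve, so check the config
# still points at volumes this node can actually reach
cat /etc/pve/nodes/pve2/qemu-server/1000.conf
```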
As the title suggests, how do I change Ceph's internal cluster network? I just added a faster NIC and can't figure out how to get the cluster to switch networks.
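For reference, this is roughly the procedure I've pieced together, assuming the cluster uses the centralized config database (the subnet is a placeholder):

```
# Point the cluster (replication/heartbeat) network at the new subnet
ceph config set global cluster_network 10.10.10.0/24

# If ceph.conf still defines cluster_network, update it there as well so it
# doesn't override the config database (on Proxmox that's /etc/pve/ceph.conf)

# Restart the OSDs one host at a time so they re-bind to the new network
systemctl restart ceph-osd.target

# Verify the back-side addresses after the restart
ceph osd dump | grep -E 'osd\.[0-9]+'
```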