see https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#pve_ceph_osds
"The WAL is placed with the DB, if not specified separately"
IIRC the default space usage is 10% (?) of the OSD size, but you can adjust it.
Side note: the SSD is a...
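For reference, this is roughly how a separate DB device is specified with pveceph. Device paths and the size are placeholders, and the option names can differ slightly between PVE versions, so check pveceph osd create --help first:

# DB (and therefore the WAL) go to the NVMe; size in GiB, otherwise the default (~10% of the OSD) is used
pveceph osd create /dev/sdX -db_dev /dev/nvme0n1 -db_dev_size 100
# verify where block.db ended up
ceph-volume lvm list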
The rbd_data.* objects seem to be leftovers. As long as there are no rbd_id.* and rbd_header.* objects there are no RBD images in the pool any more.
The easiest way (if you are really sure) would be to just delete the whole pool.
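If you want to double-check before deleting, something like this works (pool name is a placeholder):

# list all objects in the pool and look for image metadata
rados -p <pool> ls | grep -E 'rbd_id|rbd_header'
# if nothing shows up and you are sure, remove the pool
pveceph pool destroy <pool>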
I did not expect it and have already booted one node with 6.14. Interestingly, VMs with "aio=io_uring" also boot without any apparent errors.
I'm testing ..
We now have a handful of VMs running there, as a precaution with the...
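In case someone wants to pin a disk to a specific AIO mode for testing, it is a per-disk option. VMID, storage and disk name below are just examples, and qm set rewrites the whole disk line, so include any other options you had on it:

# switch this disk from the io_uring default to native AIO
qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native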
You cannot just change IPs in ceph.conf.
The first step is to add the new network to the Ceph public_network setting, then add new MONs with the new IPs to the cluster, and after that remove the old MONs.
Only once that has been successful should the old...
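Roughly, the order of operations looks like this (subnets and addresses are examples; see pveceph mon create --help for the exact options on your version):

# 1) extend the public network in /etc/pve/ceph.conf, e.g.
#    public_network = 192.168.10.0/24, 10.10.10.0/24
# 2) create MONs that bind to the new network
pveceph mon create --mon-address 10.10.10.11
# 3) once the new MONs are in quorum, remove the old ones one at a time
pveceph mon destroy <old-mon-id>
ceph mon stat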
There is no "migration" with only three nodes. the "3" in your crush rule refers to how many copies on individual nodes that have to exist in order to have a healthy pg (placement group.) the number of OSDs dont matter in this context- you can...
Yes, I am/was keen to get some of them too.
That's really a bummer :-(
I wanted to put two to four OSDs in each of them; the actual constraints should allow for four. Now look at https://docs.ceph.com/en/mimic/start/hardware-recommendations/#ram ...
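The main knob for OSD memory these days is osd_memory_target (roughly 4 GiB per OSD by default), so a rough sketch for RAM-constrained nodes would be:

# check the current value
ceph config get osd osd_memory_target
# lower it to 3 GiB per OSD (value is in bytes)
ceph config set osd osd_memory_target 3221225472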
You can change the crush_rule for a pool. This will not cause issues for the VMs, except possibly slower performance while the cluster reorganizes the data.
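For reference (pool and rule names are placeholders):

# assign a different crush rule to the pool
ceph osd pool set <pool> crush_rule <new-rule-name>
# watch the backfill/recovery progress
ceph -s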
The Cephfs storage should not use 10x what you're storing in it. I would look at it on disk and see what is actually being used.
host:/mnt/pve/cephfs# du -h
0 ./migrations
0 ./dump
8.2G ./template/iso
0 ./template/cache...
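It also helps to compare that with what Ceph itself reports, since the pool "used" value includes replication (3x by default):

# STORED vs USED per pool; USED is roughly STORED times the replica count
ceph df detail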
This is true for Windows Server. As far as I know, when using Samba, the only validated and recommended way is to use a different subnet. I see no advantage in not following the recommendation.
We have customers who do run 5-node full-mesh clusters, for example with 4x 25Gbit NICs.
Do not go for a ring topology, as it could break in two places and then you have problems.
The Routed with Fallback method is what you want...
Yes, you can run a 5-node Ceph cluster with just DAC cables (though the maximum number of nodes recommended for this kind of setup is 3) if you treat the SFP+ links as a routed L3 ring or mesh, not a Layer-2 loop. Proxmox has a documented method for this...
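For anyone looking for the actual config: the simple routed variant from the "Full Mesh Network for Ceph Server" wiki article looks roughly like this on one node of a 3-node mesh. Addresses and interface names are examples, the "with fallback" variant on that page additionally adds backup routes via the third node, and for 5 nodes in a ring you would normally let a routing daemon (FRR, also covered there) handle the paths:

# /etc/network/interfaces fragment on node1 (10.15.15.50); node2 = .51, node3 = .52
auto ens19
iface ens19 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.51/32 dev ens19
        down ip route del 10.15.15.51/32

auto ens20
iface ens20 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.52/32 dev ens20
        down ip route del 10.15.15.52/32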