You cannot just change IPs in ceph.conf.
The first step is to add the new network to the Ceph public_network setting, then add new MONs with the new IPs to the cluster, and after that remove the old MONs.
Only after that has succeeded should the old...
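A minimal sketch of that sequence, assuming the cluster runs on Proxmox VE, the old subnet is 192.168.1.0/24 and the new one is 10.10.10.0/24 (both subnets, the monitor address and <old-mon-id> are placeholders):

# /etc/pve/ceph.conf - list both subnets while the migration is in progress
[global]
    public_network = 192.168.1.0/24, 10.10.10.0/24

# create a MON bound to the new subnet, then retire one of the old ones
pveceph mon create --mon-address 10.10.10.11
pveceph mon destroy <old-mon-id>
ceph -s    # verify quorum is healthy before touching the next monitor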
There is no "migration" with only three nodes. The "3" in your crush rule refers to how many copies on individual nodes have to exist in order to have a healthy PG (placement group). The number of OSDs doesn't matter in this context; you can...
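To see where that number comes from on a given cluster (the pool name is a placeholder), something like this shows the replica count and the rule that places the copies:

ceph osd pool get <pool> size                # how many replicas the pool keeps
ceph osd pool get <pool> crush_rule          # which rule distributes those replicas
ceph osd crush rule dump replicated_rule     # failure domain, typically "host"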
Yes, I am/was keen to get some of them too.
That's really a bummer :-(
I wanted to put two to four OSDs in each of them; the actual constraints should allow for four. Now look at https://docs.ceph.com/en/mimic/start/hardware-recommendations/#ram ...
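If RAM turns out to be the limiting factor, the per-OSD memory target can be lowered from its default of roughly 4 GiB; a hedged example, the 2 GiB value below is purely illustrative:

# ceph.conf, [osd] section - cap the memory each OSD daemon aims to use
[osd]
    osd_memory_target = 2147483648    # 2 GiB instead of the ~4 GiB default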
You can change the crush_rule for a pool. This will not cause issues for the VMs, except possibly slower performance while the cluster reorganizes the data.
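Assuming the new rule already exists (pool and rule names below are placeholders), it is a single command per pool:

ceph osd crush rule ls                           # list the available rules
ceph osd pool set <pool> crush_rule <new-rule>   # data rebalances in the background
ceph -s                                          # watch the backfill/recovery progress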
The CephFS storage should not use 10x what you're storing in it. I would look at it on disk and see what is actually being used.
host:/mnt/pve/cephfs# du -h
0 ./migrations
0 ./dump
8.2G ./template/iso
0 ./template/cache...
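To compare what the files actually occupy with what the pools report (raw usage includes the replication factor), something along these lines should do; the mount point is taken from the output above:

du -sh /mnt/pve/cephfs   # logical size of the stored files
ceph df                  # STORED vs. USED per pool; USED is roughly STORED times the replica count
ceph fs status           # data/metadata pool usage for the CephFS filesystem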
This is true for Windows Server. As far as I know, when using Samba, the only validated and recommended way is to use a different subnet. I see no advantage in not following the recommendation.
We have customers who do run 5-node full-mesh clusters, for example with 4x 25Gbit NICs.
Do not go for a ring topology, as it could break in two places and then you have problems.
The Routed with Fallback method is what you want...
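For illustration, a minimal sketch of the routed full-mesh idea for one node of a 3-node cluster, loosely following the Proxmox "Full Mesh Network for Ceph Server" wiki article; the interface names (ens19/ens20) and the 10.15.15.0/24 addresses are assumptions, and the fallback variant described there adds dynamic routing on top so traffic can take the indirect path if a link dies:

# /etc/network/interfaces on node1 (10.15.15.50), direct DAC links to node2 and node3
auto ens19
iface ens19 inet static
    address 10.15.15.50/24
    up   ip route add 10.15.15.51/32 dev ens19
    down ip route del 10.15.15.51/32

auto ens20
iface ens20 inet static
    address 10.15.15.50/24
    up   ip route add 10.15.15.52/32 dev ens20
    down ip route del 10.15.15.52/32

Node2 and node3 mirror this with their own address and routes to the two remaining peers.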
Yes, you can run a 5-node Ceph cluster with just DAC cables (the maximum number of nodes recommended for this setup is 3), provided you treat the SFP+ links as a routed L3 ring or mesh, not a Layer-2 loop. Proxmox has a documented method for this...
Proxmox's Linux kernel (6.14) is based on Ubuntu's rather than Debian's, and since drivers ship with the kernel, maybe try booting an Ubuntu installer (without actually installing it) that carries the same kernel version (Ubuntu 25.04).
EDIT: The user-space is indeed based on Debian...
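When comparing the two systems, it helps to record the kernel and the driver the NIC is actually bound to; a quick, hedged check (replace <driver> with the module name reported by lspci):

uname -r                            # running kernel version
lspci -nnk | grep -A3 -i ethernet   # PCI ID plus "Kernel driver in use"
dmesg | grep -i <driver>            # driver load messages and errors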
You already found the answer. The fact that you're moving the goalposts isn't helping you. I'd advise you to drop your "wants"; the newer kernel is probably providing you with no utility at all. Given that the issues with your NIC are known and...
U = FOS
Lots of us here are in tech-support-related positions, so forum support starts to feel like "more work" after a while.
A) Watch proxmox-related youtube videos
B) Read the last 30 days of forum posts, here and on Reddit (free education)...
The forum is community-driven, so it is already a highlight that staff members are present and answer questions patiently.
What do you expect (seriously, literally)?
To be (much) clearer, I was referring to 3 hosts, assuming multiple OSDs on each with at least one left running, not 3 hosts with only 1 OSD.
For the former, Ceph will use any other OSD on the same host (technically any unused host, but there...
Does it?
With the failure domain being "host" this does not make sense...? I am definitely NOT a Ceph expert, but now I am interested in the actual behavior:
I have a small virtual test cluster with Ceph. For the following tests, three nodes...
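One hedged way to observe that behavior on such a test cluster (OSD id 0 is only an example) is to take a single OSD down and watch where its PGs end up:

ceph osd tree                    # note which OSDs sit on which host
systemctl stop ceph-osd@0        # simulate a failed OSD on one node
ceph osd out 0                   # or wait for mon_osd_down_out_interval to do it
watch ceph -s                    # recovery/backfill status
ceph pg dump pgs_brief | head    # acting sets show which OSDs now hold the copies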