Did you try a manual ceph+ganesha deployment from Debian to test against the Proxmox Ceph in your lab? This option is Debian-native and does not need to be deployed from the Proxmox nodes if you set up the Ceph client + auth inside some PVE VM, from Proxmox...
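For reference, a minimal NFS-Ganesha export block for CephFS might look like the sketch below; the Export_Id, paths and cephx user are assumptions for illustration, not values from this thread:

```
# /etc/ganesha/ganesha.conf (sketch; adjust Export_Id, Path and cephx user)
EXPORT {
    Export_Id = 1;            # any unique id
    Path = "/";               # CephFS path to export
    Pseudo = "/cephfs";       # NFSv4 pseudo path seen by clients
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;          # use the CephFS FSAL
        User_Id = "admin";    # cephx user (assumption)
    }
}
```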
I think this is because we get LDAP values already UTF-8 encoded, but treat them as if they are not. Sent a patch [0].
[0] https://lore.proxmox.com/pve-devel/20260417110451.134766-1-h.laimer@proxmox.com/T/#u
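The double-encoding effect described above can be reproduced in a few lines of Python; this is only an illustration of the failure mode, not the actual Proxmox code:

```python
# An LDAP attribute value arrives from the server as UTF-8 bytes.
raw = "Jürgen".encode("utf-8")        # b'J\xc3\xbcrgen'

# Treating those bytes as if they were NOT UTF-8 (e.g. decoding as
# Latin-1) produces mojibake:
wrong = raw.decode("latin-1")
print(wrong)   # JÃ¼rgen

# Decoding once, as UTF-8, yields the original string:
right = raw.decode("utf-8")
print(right)   # Jürgen
```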
Just be sure you do NOT mix other traffic along with these, most especially Corosync. If you have more than 4 interfaces, keep the other forms of traffic on different interfaces. If you don't, consider only using two interfaces for Ceph and two...
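The separation described above is usually expressed in ceph.conf via the public and cluster networks; the subnets below are placeholders, not values from this thread:

```
# /etc/pve/ceph.conf (sketch; substitute your own subnets)
[global]
    public_network  = 10.10.10.0/24   # client/monitor traffic
    cluster_network = 10.10.20.0/24   # OSD replication traffic
# Corosync should live on yet another, dedicated interface/network.
```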
The issue is something that I didn't expect, and I'm willing to mark this as solved. Two of the three nodes had maxed-out primary HDDs (not the pool) because of a backup job. I didn't notice it until now and didn't expect something else to be...
The important point is that in a VXLAN zone, the gateway field in the subnet settings does not automatically create a real router for that network.
It only defines addressing information for the subnet.
It does not make Proxmox spawn a gateway...
VXLAN is a layer-2 network and as such does not provide any routing functionality; the gateway setting has no effect. You'd need to add a second network device for connecting to the internet, or add a gateway VM/CT to the VXLAN that has internet...
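If you go the gateway-VM/CT route, the guest typically needs IP forwarding and NAT towards its uplink; a rough sketch, where the interface names are assumptions for illustration:

```
# Inside the gateway VM/CT (sketch; eth0 = uplink, eth1 = VXLAN subnet)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# Guests in the VXLAN then use this VM's eth1 address as their default gateway.
```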
As others have mentioned, it is technically possible to set size=2 and min_size=1, but it does come with risks.
From a Proxmox and Ceph perspective, this configuration is not recommended. While it may not be explicitly unsupported, the fact that...
I gathered some experience a year ago: https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/
If I want a stable system, I would never start with fewer than three copies and min_size=2.
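For completeness, the replication settings discussed here map to these pool properties (the pool name is a placeholder):

```
# Recommended defaults (size=3, min_size=2); <pool> is your pool name
ceph osd pool set <pool> size 3
ceph osd pool set <pool> min_size 2
```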
Hi,
you can just use apt to install the .deb package. Navigate to the directory where the .deb file is located and run the following command:
sudo apt install ./<your-file>
There are several calculators online such as https://florian.ca/ceph-calculator/.
With 3 nodes and 1 off there is nowhere to replicate, so yes, you’d have 2 copies. If another OSD is then lost, those PGs would have only 1 copy and I/O would stop...
Each of the 3 nodes has a full copy of all data.
So your real usable space is somewhere below 27 TB - you already mentioned the % values. Probably a good idea to play it safe and stay under 70% in case a sudden spike happens, like unplanned data...
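A quick back-of-the-envelope check of the above (the 27 TB figure is from this thread, the 70% safety margin from the advice above):

```python
raw_per_node_tb = 27   # raw OSD capacity per node (figure from the thread)
replicas = 3           # size=3 across 3 nodes -> every node holds a full copy

# With one full copy per node, usable capacity is bounded by a single
# node's raw space, before any overhead.
usable_tb = raw_per_node_tb

# Staying under ~70% utilization leaves headroom for sudden spikes:
safe_tb = usable_tb * 0.70
print(f"usable ceiling: {usable_tb} TB, safe working set: {safe_tb:.1f} TB")
```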