For the second network (the Ceph cluster network) you do not need an outside connection, and in particular no default gateway (a host can only have one default gateway).
Use the IP network you designated for this second interface as the Ceph cluster network in the configuration, and Ceph should automatically...
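As a rough sketch, the relevant entries in /etc/ceph/ceph.conf could look like this (both subnets are placeholders for your own networks):

  [global]
  public_network  = 192.0.2.0/24    # front-side traffic to clients and monitors (placeholder)
  cluster_network = 10.10.10.0/24   # OSD replication/heartbeat traffic, no gateway needed (placeholder)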
Ceph does not have an independent location to place the third copy.
But inconsistent PGs mean that there are copies that do not match each other. Read the section on troubleshooting PGs in the Ceph documentation.
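A minimal sketch of the usual first steps (the PG id 2.1f is a placeholder for whatever "ceph health detail" reports):

  ceph health detail                  # lists the inconsistent PGs
  rados list-inconsistent-obj 2.1f    # show which copies of which objects differ
  ceph pg repair 2.1f                 # let Ceph repair the PG from the authoritative copies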
This is the smallest possible Ceph cluster without any room for parallelization.
Each OSD has to participate in every write request.
The Samsung 870 QVO is a QLC SSD with a small "TurboWrite" cache of only a few gigabytes.
As soon as this cache is full, the write performance drops to around 160 MB/s...
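You can reproduce this with fio; a sketch of a sustained sequential write that is large enough to blow through the cache (the device path and the 100G size are placeholders, and writing to /dev/sdX destroys the data on it):

  fio --name=sustained-write --filename=/dev/sdX --rw=write \
      --bs=4M --size=100G --direct=1 --ioengine=libaio --iodepth=8

Watch the bandwidth over time: it starts at cache speed and then collapses once the cache is exhausted.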
This will not work like that. The switches are also connected to each other, aren't they, or is the Proxmox host then supposed to become the core switch?
This way you create a switch loop and bring the network down.
Not online. You have to redeploy each OSD, and that means data movement.
Usually this can be done on a host-by-host basis without losing too much redundancy.
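One possible sequence per OSD, sketched with the Proxmox helper commands (osd.3 and /dev/sdX are placeholders):

  ceph osd out 3                    # stop placing new data on osd.3
  # wait until the cluster has rebalanced back to HEALTH_OK
  ceph osd safe-to-destroy osd.3    # verify that removing it loses no data
  pveceph osd destroy 3 --cleanup   # remove the old OSD and wipe the disk
  pveceph osd create /dev/sdX       # recreate it with the new layout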
You cannot have an admin at pool level who is also an admin for user management or storage management. These two are global functions and are not restricted to pools.
You can restrict via permissions which storage a group of users may consume for the VMs in their pool.
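A sketch with the pveum CLI (the group name devteam and the storage ID local-lvm are placeholders; PVEDatastoreUser is a built-in role that allows allocating space):

  pveum acl modify /storage/local-lvm --groups devteam --roles PVEDatastoreUser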
Your cluster is not healthy.
You only have 2 of 3 HDDs online. There are far too many PGs on the HDD OSDs; they should be reduced to around 200.
Your "replicated_rule" does not use the device class and mixes SSDs and HDDs.
You have to fix these issues before you can talk about performance...
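A sketch of how the rule and PG issues could be fixed (rule and pool names are placeholders; 256 is the power of two closest to the ~200 mentioned above):

  ceph osd crush rule create-replicated replicated_hdd default host hdd
  ceph osd crush rule create-replicated replicated_ssd default host ssd
  ceph osd pool set hdd-pool crush_rule replicated_hdd   # hdd-pool is a placeholder name
  ceph osd pool set ssd-pool crush_rule replicated_ssd
  ceph osd pool set hdd-pool pg_num 256                  # this triggers data movement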
Are these enterprise-class SSDs or consumer-grade SSDs?
The latter tend to have a very small write cache, and as soon as it is filled the write bandwidth drops below HDD speed.
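The difference shows up most clearly with synchronous writes, which Ceph issues constantly; a sketch of a fio test for that (the device path is a placeholder, and the test overwrites data on it):

  fio --name=sync-write --filename=/dev/sdX --rw=write --bs=4k \
      --sync=1 --iodepth=1 --numjobs=1 --runtime=60 --time_based

Enterprise SSDs with power-loss protection can acknowledge such writes from their protected cache and stay fast; consumer drives often collapse to a few hundred IOPS here.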