Right now one machine is down, and ALL VMs are frozen... I have no idea. How can I figure out where the problem is?
Here is the output of pveceph status
{
   "health" : {
      "overall_status" : "HEALTH_WARN",
      "summary" : [
         {
            "severity" : "HEALTH_WARN"...
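Not from the original post, but as a general suggestion for a truncated HEALTH_WARN summary like the one above: the standard Ceph status commands expand it into something actionable.

```
ceph health detail   # lists each warning in full, with the affected OSDs/mons/PGs
ceph -s              # cluster status: mon quorum, OSD up/in counts, PG states
```

If a monitor is down or PGs are below min_size, it should show up directly in that output.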
Found an interesting problem:
rbd: ceph-storage
monhost 10.2.19.11;10.2.19.12;10.2.19.13
content images
krbd 0
pool rbd
This is my storage.cfg configuration for the Ceph storage. But I configured all 4 nodes (10.2.19.14 too) as monitors in Ceph:
I will try to...
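If the fourth monitor is supposed to be usable by Proxmox as well, the monhost line would need to list it too. A sketch of the adjusted storage.cfg entry (assuming a monitor really is running and healthy on 10.2.19.14):

```
rbd: ceph-storage
        monhost 10.2.19.11;10.2.19.12;10.2.19.13;10.2.19.14
        content images
        krbd 0
        pool rbd
```

Listing all monitors only affects which mons Proxmox can contact; it does not change the Ceph quorum itself.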
Sorry! As far as I'm concerned, this post can be deleted otherwise...
Regarding Dietmar's question: 4 machines, each with 1 monitor. And 3 monitors were active at that point in time.
Why is this asking for trouble? Sure, it could be very slow. But normally it should be okay...?
I used min_size 1 because I have a replica size of 3. And with this configuration it could be possible that VM100's data is on both OSDs of proxmox4 and on one OSD of proxmox3. But with min_size 2 the VM...
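For reference (not from the original post): min_size can be raised on a live pool with a single command, assuming the pool is named rbd as in the config above.

```
# Require at least 2 replicas up before a PG accepts I/O
ceph osd pool set rbd min_size 2
```

The trade-off: with size 3 / min_size 2, I/O to a PG pauses when fewer than 2 copies are available, instead of accepting writes onto a single surviving copy (which is what makes min_size 1 risky).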
Hello everyone,
I have a problem with my Proxmox Ceph Cluster:
There are 4 machines in a Proxmox 4.4-87 cluster. All of these machines have 2 Ceph OSDs, so in total there are 8 OSDs.
Ceph Pool config is like this:
ceph osd dump | grep -i rbd
pool 5 'rbd' replicated size 3 min_size 1...