They are stored on the root FS of the host they run on. /var/lib/ceph/ceph-mon/…
Ideally you have enough space available. If you don't need the `local-lvm` storage at all, you could consider removing the automatically created pve/data LV and expanding the pve/root LV plus the filesystem mounted at /.
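A rough sketch of the steps, assuming the default LVM layout and an ext4 root filesystem (back up first, and also remove the local-lvm storage entry under Datacenter -> Storage):
lvremove pve/data
lvextend -l +100%FREE pve/root
resize2fs /dev/pve/root
If the root filesystem is XFS instead, the last step would be xfs_growfs / rather than resize2fs.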
Don't...
As of now, there is no way to increase the timeout for the DHCP client in the installer. So your only option right now is to figure out why the DHCP leases aren't always handed out quickly. Is the DHCP server too slow? Does it have some other dependencies that it needs to query before it...
There will be no replication on shutdown. It will run on the schedule.
This is also something you need to think about if you want to use HA: how much potential data loss are you fine with? The shortest possible interval is every minute.
If that is a complete no-go, then look...
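For reference, a replication job with that shortest schedule could be created like this (the job ID 100-0 and target node pve2 are just placeholders):
pvesr create-local-job 100-0 pve2 --schedule "*/1"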
How is the cluster utilizing its resources during normal operation? E.g. network usage, CPU load on the hosts?
If it is already operating close to its limits, any additional rebalance or recovery action could push it over them.
It depends. If you restore by going through: Node -> VM -> Backups -> Restore
You will see that the VMID field is fixed. This restores over the current VM, erasing all of its current disk images.
If you go through: Node -> Storage -> Restore backup,
you will see that you can change the...
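On the CLI, qmrestore does the same; the archive path, new VMID and storage below are only examples:
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 200 --storage local-lvm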
OSD 0 is still up, and since this is a 3-node cluster with 1 OSD, you should have all copies on that disk, as already mentioned. If you can, add new disks to the other hosts and create OSDs for them. Once Ceph has 2 copies/replicas, the pool(s) should be operational again. If you can't quickly...
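Creating the new OSDs can be done in the GUI or, for example, on each of the other hosts (the device name is just an example, use the actual empty disk):
pveceph osd create /dev/sdb
Afterwards, ceph osd tree should show the new OSDs as up and in.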
One more small note regarding the common misconception about license vs. subscription.
Proxmox VE, and our other products, are licensed under the AGPLv3, without any restriction on functionality.
We sell subscriptions which give access to the enterprise repos and, depending on...
I need to mention it so people know not to mess around with it, or at least know the ramifications. I already dread the support ticket or forum thread where someone has corrupted guest disk images, and only after some extensive troubleshooting do we realize that they disabled/disarmed the watchdog...
I can really only discourage you from doing this! Make sure your Corosync is set up in a stable way, with multiple links that are unlikely to all fail at the same time.
Should only one node lose the corosync connection, it is expected that it will do a hard reset if it cannot reestablish it...
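Purely as an illustration (node name and addresses are made up), a node entry in /etc/pve/corosync.conf with two links looks roughly like this; remember to bump config_version in the totem section whenever you edit the file:
node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
}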
Once you use HA, having quorum at all times is important. In a two-node cluster, with one node down, the remaining node has 50% of the votes. This is not enough.
Either add another node to the cluster or use a QDevice to get an additional vote without a full Proxmox VE installation.
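A rough sketch of the QDevice setup (the IP is a placeholder; the external host only needs the qnetd daemon):
apt install corosync-qnetd        # on the external QDevice host
apt install corosync-qdevice      # on each cluster node
pvecm qdevice setup 192.0.2.10    # run once on one cluster node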
@Azunai333 already explained it, but here it is in a bit more detail:
The CRUSH map, which you can see on the right side of the Ceph -> Configuration panel, defines how Ceph sees the cluster topology. You have several buckets, which form the tree of the cluster.
You most likely have the default hierarchy of...
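To see that tree on the CLI, you can run:
ceph osd tree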
Has it already been removed from the cluster? https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_remove_a_cluster_node
Then you can try restarting the pve-ha-lrm.service and pve-ha-crm.service on all nodes.
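For example:
systemctl restart pve-ha-lrm.service pve-ha-crm.service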
That is all fine. Once HA has been active at some point, the details are shown. All of them are in "idle" mode. That is how it should be, and if the Corosync connection drops, the hosts should no longer fence themselves.
Host010 does not seem to be running, or it has problems with Corosync.
Hmm, okay, so if you can achieve double the performance with 2 VMs in total, you can try to see if the VM configs can be improved.
One thing could be to switch from direct RBD to KRBD (the host kernel connects to RBD instead of QEMU doing it directly). To change this, edit the storage in...
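Purely as an illustration (storage and pool names are made up), the resulting entry in /etc/pve/storage.cfg would then contain the krbd flag:
rbd: ceph-vm
    content images
    krbd 1
    pool vm-pool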
No, at least not by design. If it does happen then (with the LRM idle), it will have other causes, which could well include defective hardware.
When the LRM is active. As soon as there are no more HA guests on the node, the LRM should switch back to the "idle" state after 10 minutes. Then the node no longer reboots when the cluster connection is lost.
Could you please try the following?
Stop the qmeventd:
systemctl stop qmeventd.service
and then start it in the foreground to see the full debug log:
qmeventd -f -v /var/run/qmeventd.sock
Then wait about 10 seconds. You will see the running VMs register themselves.
Then shut down...
With HA guests on a node, that node's LRM switches to the "active" state. You can see this under Datacenter -> HA or with ha-manager status. In this state, a node fences itself if the connection to the quorum cannot be re-established for more than a minute. This way...