Search results

  1. Ceph: Balancing disk space unequally!?!?!?!

    Hi, I have a three node PVE cluster in which each node is also a Ceph node. Each Ceph node used to have one identical HDD and the pool was getting full. Therefore, and because one is supposed to have more OSDs anyway, I added one additional identical HDD OSD per node. Ceph rebalanced between...
  2. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    Hi Jeremy, Sorry, it would have made sense to report this to the TKL forums as well after resolving it. I shall try and keep that in mind, should I ever have another TKL related issue. Great to hear that dbus will be included in v18. Cheers
  3. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    Happy to confirm that installing dbus on two of my TKL appliances did the trick (I did not also install systemd, that seems to be installed already). The only other comment I have is that it did not work right after the installation of dbus, I needed to reboot the VM once. Thanks for your...
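
    For reference, the fix described in this thread boils down to a few commands (a sketch, assuming a Debian-based Turnkey Linux guest; run inside the VM as root):

    ```shell
    # Install dbus so that systemd-logind can receive the ACPI
    # shutdown request forwarded by qemu-guest-agent:
    apt-get update
    apt-get install -y dbus
    # As noted above, the fix did not take effect until after one reboot:
    reboot
    ```
    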
  4. Even no of PVE nodes?

    Yeah, I think you are right. But then it doesn't work the way it needs to. I want one of the three 24/7 nodes to be allowed to fail (or go down for maintenance) and the cluster continue to work. Maybe UdoB's idea could be the solution. Have to think that through.
  5. Even no of PVE nodes?

    Good point. So my plan was to have the fourth node join CEPH but not host any OSDs. If I also don't set it up as a Monitor or Manager, would that keep it neutral in the quorum count?
  6. Even no of PVE nodes?

    One more try: Now, I have three nodes with three votes and need two for quorum. If I add a fourth server, I also add the quorum device. This would give me five "nodes" (the quorum device counts) with five votes out of which I need three for quorum. So when node no. 4 is offline (which is most...
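
    The vote arithmetic in that post can be sanity-checked: corosync requires a strict majority, i.e. floor(votes/2) + 1. A minimal sketch, using the five-vote setup described above (four nodes plus one QDevice vote):

    ```shell
    # 4 nodes + 1 QDevice vote = 5 votes in total.
    votes=5
    # Quorum is a strict majority of the configured votes.
    quorum=$(( votes / 2 + 1 ))
    echo "$quorum votes needed for quorum"   # 3
    # With node no. 4 offline: 3 nodes + QDevice = 4 votes >= 3, still quorate.
    ```
    
    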
  7. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    pveversion -v
    proxmox-ve: 7.4-1 (running kernel: 5.15.108-1-pve)
    pve-manager: 7.4-16 (running version: 7.4-16/0f39f621)
    pve-kernel-5.15: 7.4-4
    pve-kernel-5.15.108-1-pve: 5.15.108-1
    pve-kernel-5.15.107-2-pve: 5.15.107-2
    pve-kernel-5.15.107-1-pve: 5.15.107-1
    pve-kernel-5.15.102-1-pve: 5.15.102-1...
  8. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    No, nothing further. When I ping the guest agent, I don't get any response (no time out or anything). (I can ping the VM with a normal ping, though).
  9. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    Qemu-guest-agent is up and running - I can see the VM's IP address in the GUI. Inside the VM, journalctl shows me: qemu-ga[xxx]: info: guest-shutdown called, mode: (null)
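
    The agent channel mentioned here can also be probed from the PVE host side (a sketch; `101` is a placeholder VMID):

    ```shell
    # Ask the guest agent inside VM 101 to respond. This times out when
    # the agent socket is unresponsive, even if normal ICMP ping still works:
    qm agent 101 ping
    ```
    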
  10. Even no of PVE nodes?

    Okay, new try: Now, I have three nodes with three votes and need two for quorum. If I add a fourth server, I also add the quorum device. This would give me four nodes (the quorum device does not count here) with four votes out of which I need three for quorum. So when node no. 4 is offline...
  11. Ceph node without OSDs?

    Thank you, noted - we are discussing this in a parallel thread at this very time (I just wanted to have two threads for two distinct questions)...
  12. Even no of PVE nodes?

    And how would I implement that? Now, I have three nodes and two form a quorum. When I add the fourth server, I would also add the quorum device, right? Then this would give me five "nodes" out of which I need four for quorum. But node no. 4 will be offline most of the time. If one of the...
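
    The implementation question asked here is covered by the standard PVE QDevice tooling; a sketch (the IP address is a placeholder for the machine hosting the quorum device):

    ```shell
    # On the external QDevice host (a machine outside the cluster):
    apt install corosync-qnetd

    # On every PVE cluster node:
    apt install corosync-qdevice

    # From any one cluster node, register the QDevice:
    pvecm qdevice setup 192.0.2.10

    # Inspect the resulting vote and quorum numbers:
    pvecm status
    ```
    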
  13. Even no of PVE nodes?

    Well, my cluster has three nodes and would, from time to time (when I turn on no. 4), have four nodes. And I am trying to find a solution that works both in the three node scenario as well as in the four node scenario. If it were possible to totally ignore node no. 4 when it is online, that...
  14. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    I noticed recently that a few of my VMs won't shut down (the request times out) when I give the signal on the PVE host (or when this is triggered by, say, a backup run). I am using qemu-guest-agent in all my VMs. Those VMs that don't shut down seem to have in common that they are Turnkey Linux...
  15. Even no of PVE nodes?

    I did run the third node for a while as a (full-featured) VM off the server on which my PBS resides, before giving it its own server hardware. So this is an option. This VM was part of both the dedicated Corosync network and the dedicated Ceph network. I would like to avoid the need to...
  16. Even no of PVE nodes?

    Makes sense. All nodes are connected via the same infrastructure and are located in the same room. (I would love to have a geographically distributed cluster, but latency on the connection means available to me (i.e. end-user DSL lines) is too high for Corosync, as I understand.) But I do...
  17. Even no of PVE nodes?

    Good point: Yes, I do have shared storage (all three original nodes are also CEPH nodes with OSDs). The fourth node would probably be also a CEPH node (but without OSDs). In my idea, node no 4 would be completely ignored for all quorum purposes. Most of the time it would be offline anyway. So I...
  18. Ceph node without OSDs?

    Hi, I have a three node home lab PVE cluster. Each node is also a CEPH node and has two OSDs (one being assigned to a "fast" pool for apps and one being assigned to a "slow" pool for data) (I know that's fewer than recommended and I am contemplating adding more OSDs but this is a home lab...)...
  19. Even no of PVE nodes?

    Hi, I have a three node home lab cluster. The reason I set it up like this is that it is recommended to have an uneven number of nodes in order to avoid a split-brain situation when one of the nodes fails. After having used PVE for a while now, it is dawning on me that this is only relevant...