Search results

  1. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    Qemu-guest-agent is up and running - I can see the VM's IP address in the GUI. Inside the VM, journalctl shows me: qemu-ga[xxx]: info: guest-shutdown called, mode: (null)
  2. Even no of PVE nodes?

    Okay, new try: Now, I have three nodes with three votes and need two for quorum. If I add a fourth server, I also add the quorum device. This would give me four nodes (the quorum device does not count here) with four votes out of which I need three for quorum. So when node no. 4 is offline... [the underlying quorum arithmetic is sketched after this results list]
  3. Ceph node without OSDs?

    Thank you, noted - we are discussing this in a parallel thread at this very time (I just wanted to have two threads for two distinct questions)...
  4. Even no of PVE nodes?

    And how would I implement that? Now, I have three nodes and two form a quorum. When I add the fourth server, I would also add the quorum device, right? Then this would give me five "nodes" out of which I need four for quorum. But node no. 4 will be offline most of the time. If one of the...
  5. Even no of PVE nodes?

    Well, my cluster has three nodes and would, from time to time (when I turn on no. 4), have four nodes. And I am trying to find a solution that works both in the three-node scenario and in the four-node scenario. If it were possible to totally ignore node no. 4 when it is online, that...
  6. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    I noticed recently that a few of my VMs won't shut down (the request times out) when I give the signal on the PVE host (or when this is triggered by, say, a backup run). I am using qemu-guest-agent in all my VMs. Those VMs that don't shut down seem to have in common that they are Turnkey Linux...
  7. Even no of PVE nodes?

    I did run the third node for a while as a (full-featured) VM off the server on which my PBS resides before giving it its own server hardware. So this is an option. This VM was part of both the dedicated Corosync network and the dedicated Ceph network. I would like to avoid the need to...
  8. Even no of PVE nodes?

    Makes sense. All nodes are connected via the same infrastructure and are located in the same room. (I would love to have a geographically distributed cluster, but the latency of the connections available to me (i.e. end-user DSL lines) is too high for Corosync, as I understand it.) But I do...
  9. Even no of PVE nodes?

    Good point: Yes, I do have shared storage (all three original nodes are also Ceph nodes with OSDs). The fourth node would probably also be a Ceph node (but without OSDs). In my idea, node no. 4 would be completely ignored for all quorum purposes. Most of the time it would be offline anyway. So I...
  10. Ceph node without OSDs?

    Hi, I have a three-node home lab PVE cluster. Each node is also a Ceph node and has two OSDs (one assigned to a "fast" pool for apps and one assigned to a "slow" pool for data) (I know that's fewer than recommended and I am contemplating adding more OSDs, but this is a home lab...)...
  11. Even no of PVE nodes?

    Hi, I have a three-node home lab cluster. The reason I set it up like this is that it is recommended to have an odd number of nodes in order to avoid a split-brain situation when one of the nodes fails. After having used PVE for a while now, it is dawning on me that this is only relevant...
  12. Deleted many GBs from datastore but only freed a few MBs!?!?!?!?!

    Thanks. That then leaves me with this problem: When I manually start the GC, it stops after about 7%, complaining about not enough disk space. I'm assuming the same will happen when GC runs automatically. Any idea how I can give PBS the disk space it needs to delete stuff? The datastore sits...
  13. Deleted many GBs from datastore but only freed a few MBs!?!?!?!?!

    Hi, I have a PBS running that became full. Pruning didn't help because the garbage collection couldn't complete successfully due to a lack of free disk space. So I decided to manually remove backups that I no longer need (removed the entire backup for each obsolete VM, not just...
  14. I still don't get Ceph...

    Thanks, I am aware of that and I guess I want Ceph to do what has to be done. That's okay for me. It is just that I had expected all the necessary rebalancing etc. to happen after I downed and outed the OSD. With that done, I expected the OSD to be gone in the eyes of Ceph. And that destroying...
  15. I still don't get Ceph...

    Hmm, and do you know which PGs are relocated and why? Thanks!
  16. I still don't get Ceph...

    Hi, I have this little three-node PVE/Ceph cluster. And due to performance issues I got around to swapping my Ceph OSD SSDs once again. I outed one OSD in one of the nodes and Ceph started rebalancing/remapping/backfilling (as expected). After rebalancing/remapping/backfilling was done, I...
  17. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Thanks, unfortunately, I don't know how to interpret the output: some avg10=0.00 avg60=0.00 avg300=0.00 total=19263172693 full avg10=0.00 avg60=0.00 avg300=0.00 total=19236635844 some avg10=0.00 avg60=0.00 avg300=0.00 total=2112331 full avg10=0.00 avg60=0.00 avg300=0.00 total=2110601 some... [a sketch for reading these pressure-stall values follows the results list]
  18. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Dammit! Today I received my first pair of PM983 (U.2). The U.2 interface is completely new to me (had never heard of it before), but before I ordered I looked it up, obviously. I found that it is designed to work with SATA, SAS and PCI Express SSDs and one just needs a suitable controller. So...
  19. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Okay, thanks. So I have ordered some of these SSDs to test. In the meantime, I have switched off all non-essential VMs, which brought down the iodelays substantially and rendered the remaining VMs usable again. But I still notice iodelay spikes in the GUI every now and then. Is it possible to...
  20. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Okay, so is Samsung OEM Datacenter SSD PM893 / Enterprise SSD PM893 a suitable drive for Ceph? Will this restore my cluster to its old glory?
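
The quorum arithmetic discussed in results 2, 4, 5, 9 and 11 comes down to Corosync's majority rule: a partition only has quorum while it holds a strict majority of the expected votes. The sketch below illustrates that calculation; it assumes one vote per node and models a QDevice as a single extra vote, which is an assumption for illustration only, since the actual vote contribution of corosync-qdevice depends on the configured algorithm (ffsplit vs. lms).

```python
# Illustrative sketch of majority-based quorum arithmetic (not Corosync's
# actual implementation). Assumes one vote per node; the QDevice is modelled
# as one extra vote, which in reality depends on the qdevice algorithm.

def quorum_threshold(expected_votes: int) -> int:
    """Smallest number of votes that forms a strict majority."""
    return expected_votes // 2 + 1

def is_quorate(votes_present: int, expected_votes: int) -> bool:
    """True if a partition holding `votes_present` votes has quorum."""
    return votes_present >= quorum_threshold(expected_votes)

if __name__ == "__main__":
    # Three nodes, no QDevice: 3 expected votes, 2 needed for quorum.
    print(quorum_threshold(3))                            # 2
    # Four nodes plus a QDevice vote: 5 expected votes, 3 needed.
    print(quorum_threshold(5))                            # 3
    # With node no. 4 offline: 3 node votes + 1 QDevice vote = 4 present.
    print(is_quorate(votes_present=4, expected_votes=5))  # True
```

Whether a real cluster behaves exactly like this depends on how corosync-qdevice is configured, so the vote counts reported in the threads themselves take precedence over this model.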
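
The values quoted in result 17 look like Linux pressure-stall information (PSI) as exposed under /proc/pressure/; the snippet does not say which file they came from (and the quote appears to include output from more than one such file), so reading /proc/pressure/io below is an assumption. The "some" line gives the share of wall time in which at least one task was stalled on the resource, "full" the share in which all non-idle tasks were stalled; avg10/avg60/avg300 are percentages over 10/60/300-second windows and total is the cumulative stall time in microseconds since boot. A minimal parsing sketch:

```python
# Sketch: parse Linux PSI lines such as those in /proc/pressure/io.
# Example format (one line per level):
#   some avg10=0.00 avg60=0.00 avg300=0.00 total=19263172693
#   full avg10=0.00 avg60=0.00 avg300=0.00 total=19236635844
# avg* are percentages of wall time spent stalled over 10/60/300 s windows;
# total is cumulative stall time in microseconds since boot.

from pathlib import Path


def parse_psi(text: str) -> dict:
    """Return {"some": {"avg10": ...}, "full": {...}} from PSI file text."""
    result = {}
    for line in text.strip().splitlines():
        level, *fields = line.split()
        result[level] = {k: float(v) for k, v in (f.split("=") for f in fields)}
    return result


if __name__ == "__main__":
    io = parse_psi(Path("/proc/pressure/io").read_text())
    # A persistently non-zero avg10 here means tasks were recently stalled
    # on I/O, the kind of pressure that tends to accompany iodelay spikes.
    print(io["some"]["avg10"], io["full"]["avg10"])
```

In the quoted numbers all avg values are 0.00, which would suggest no I/O pressure at the moment the files were read; only the cumulative totals show that stalls have occurred at some point since boot.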