Search results

  1. Even no of PVE nodes?

    Well, my cluster has three nodes and would, from time to time (when I turn on no. 4), have four nodes. And I am trying to find a solution that works both in the three-node scenario and in the four-node scenario. If it were possible to totally ignore node no. 4 when it is online, that...
  2. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    I noticed recently that a few of my VMs won't shut down (the request times out) when I give the signal on the PVE host (or when this is triggered by, say, a backup run). I am using qemu-guest-agent in all my VMs. Those VMs that don't shut down seem to have in common that they are Turnkey Linux...
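
    A minimal sketch of how one might verify the guest agent is actually reachable before blaming the appliance; the VM ID 101 is a placeholder, and the timeout value is just an example.

        # On the PVE host: does the agent answer at all? (101 = placeholder VM ID)
        qm agent 101 ping && echo "agent reachable" || echo "agent not responding"

        # Inside the Turnkey VM: is the agent service installed and running?
        systemctl status qemu-guest-agent

        # Shut down via the agent with an explicit timeout instead of relying on ACPI.
        qm shutdown 101 --timeout 120
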
  3. Even no of PVE nodes?

    I did run the third node for a while as a (full-featured) VM off the server on which my PBS resides, before giving it its own server hardware. So this is an option. This VM was part of both the dedicated Corosync network and the dedicated Ceph network. I would like to avoid the need to...
  4. Even no of PVE nodes?

    Makes sense. All nodes are connected via the same infrastructure and are located in the same room. (I would love to have a geographically distributed cluster, but the latency on the connection options available to me (i.e. end-user DSL lines) is too high for Corosync, as I understand it.) But I do...
  5. Even no of PVE nodes?

    Good point: Yes, I do have shared storage (all three original nodes are also Ceph nodes with OSDs). The fourth node would probably also be a Ceph node (but without OSDs). In my idea, node no. 4 would be completely ignored for all quorum purposes. Most of the time it would be offline anyway. So I...
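
    One approach sometimes discussed for "a node that should not count for quorum" is giving it zero quorum votes in corosync.conf. This is only a hedged sketch, not a recommendation; node name, ID and address below are placeholders, and whether this is a good idea for a given cluster is a separate question.

        # Inspect the current vote situation first.
        pvecm status          # shows expected votes and per-node votes

        # If going the zero-vote route, edit /etc/pve/corosync.conf and bump
        # config_version; the nodelist entry for the on/off node might look like:
        #   node {
        #     name: pve4
        #     nodeid: 4
        #     quorum_votes: 0
        #     ring0_addr: 10.0.0.14
        #   }
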
  6. Ceph node without OSDs?

    Hi, I have a three-node home lab PVE cluster. Each node is also a Ceph node and has two OSDs (one assigned to a "fast" pool for apps and one assigned to a "slow" pool for data). (I know that's fewer than recommended and I am contemplating adding more OSDs, but this is a home lab...)...
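
    The fast/slow split described above is usually expressed via CRUSH device classes and one rule per class. A minimal sketch, assuming custom class names "fast"/"slow" and pool names "apps"/"data" (all placeholders):

        # Clear the auto-assigned class, then tag each OSD with a custom class.
        ceph osd crush rm-device-class osd.0
        ceph osd crush set-device-class fast osd.0
        ceph osd crush set-device-class slow osd.1

        # One replicated rule per class, then point each pool at its rule.
        ceph osd crush rule create-replicated fast-rule default host fast
        ceph osd crush rule create-replicated slow-rule default host slow
        ceph osd pool set apps crush_rule fast-rule
        ceph osd pool set data crush_rule slow-rule
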
  7. Even no of PVE nodes?

    Hi, I have a three-node home lab cluster. The reason I set it up like this is that it is recommended to have an odd number of nodes in order to avoid a split-brain situation when one of the nodes fails. After having used PVE for a while now, it is dawning on me that this is only relevant...
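
    For keeping an even-sized cluster quorate, the usual answer is an external QDevice rather than a fourth full node. A minimal sketch, assuming 10.0.0.50 is a placeholder for some small always-on host (a Pi or a tiny VM) running corosync-qnetd:

        # On every PVE node:
        apt install corosync-qdevice
        # On the external host: apt install corosync-qnetd

        # From one PVE node, register the external vote:
        pvecm qdevice setup 10.0.0.50
        pvecm status          # should now list Qdevice votes
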
  8. Deleted many GBs from datastore but only freed a few MBs!?!?!?!?!

    Thanks. That then leaves me with this problem: when I manually start the GC, it stops at about 7%, complaining about insufficient disk space. I'm assuming the same will happen when GC runs automatically. Any idea how I can give PBS the disk space it needs to delete stuff? The datastore sits...
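
    A minimal sketch of the checks involved, assuming "store1" is a placeholder datastore name and the path is a placeholder for wherever the datastore actually lives:

        # How full is the filesystem backing the datastore really?
        df -h /path/to/datastore

        # Kick off garbage collection manually and watch its status / task log.
        proxmox-backup-manager garbage-collection start store1
        proxmox-backup-manager garbage-collection status store1
        proxmox-backup-manager task list        # find the GC task and its error output
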
  9. Deleted many GBs from datastore but only freed a few MBs!?!?!?!?!

    Hi, I have a PBS running that became full. Pruning didn't help because the garbage collection couldn't complete successfully due to insufficient disk space. So I decided to manually remove backups that I no longer need (removed the entire backup for each obsolete VM, not just...
  10. I still don't get Ceph...

    Thanks, I am aware of that and I guess I want Ceph to do what has to be done. That's okay for me. It is just that I had expected all the necessary rebalancing etc. to happen after I downed and outed the OSD. With that done, I expected the OSD to be gone in the eyes of Ceph. And that destroying...
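
    For reference, a hedged sketch of the retirement sequence typically used before an OSD is considered "gone" by Ceph; osd.3 is a placeholder ID and the systemd unit name assumes a standard ceph-osd deployment:

        ceph osd out osd.3                       # stop new data landing on it, triggers rebalance
        ceph -s                                  # wait until no misplaced/degraded objects remain
        ceph osd safe-to-destroy osd.3           # confirms removing it would lose no data
        systemctl stop ceph-osd@3                # on the node hosting the OSD
        ceph osd purge 3 --yes-i-really-mean-it  # removes it from the CRUSH map, auth and OSD map
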
  11. I still don't get Ceph...

    Hmm, and do you know which PGs are relocated and why? Thanks!
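
    A hedged sketch of how one might actually list the PGs being moved and the OSDs they map to (osd.3 is a placeholder ID):

        ceph pg ls remapped                      # PGs whose up set differs from their acting set
        ceph pg ls-by-osd osd.3                  # PGs that currently involve a given OSD
        ceph pg dump pgs_brief | less            # state plus up/acting OSD mapping for every PG
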
  12. I still don't get Ceph...

    Hi, I have this little three-node PVE/Ceph cluster, and due to performance issues I got around to swapping my Ceph OSD SSDs once again. I outed one OSD in one of the nodes and Ceph started rebalancing/remapping/backfilling (as expected). After the rebalancing/remapping/backfilling was done, I...
  13. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Thanks, unfortunately, I don't know how to interpret the output:
        some avg10=0.00 avg60=0.00 avg300=0.00 total=19263172693
        full avg10=0.00 avg60=0.00 avg300=0.00 total=19236635844
        some avg10=0.00 avg60=0.00 avg300=0.00 total=2112331
        full avg10=0.00 avg60=0.00 avg300=0.00 total=2110601
        some...
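
    For context, those lines read like Linux PSI (pressure stall information) output: avg10/avg60/avg300 are the percentage of wall time over the last 10/60/300 seconds in which tasks were stalled (waiting on IO in the "some" case, all runnable tasks stalled in the "full" case), and total is the cumulative stall time in microseconds since boot. A minimal way to watch it live:

        cat /proc/pressure/io
        watch -n 2 cat /proc/pressure/io         # watch avg10 rise while the cluster feels slow
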
  14. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Dammit! Today I received my first pair of PM983 (U.2) drives. The U.2 interface is completely new to me (I had never heard of it before), but before I ordered I looked it up, obviously. I found that it is designed to work with SATA, SAS and PCI Express SSDs and that one just needs a suitable controller. So...
  15. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Okay, thanks. So I have ordered some of these SSDs to test. In the meantime, I have switched off all non-essential VMs, which brought down the IO delays substantially and rendered the remaining VMs usable again. But I still notice IO delay spikes in the GUI every now and then. Is it possible to...
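
    A hedged sketch of how one could pin down which disk or process is behind a spike while it is visible in the GUI (package names assume Debian-based PVE hosts):

        apt install sysstat iotop                # provides iostat and iotop if not present
        iostat -x 2                              # per-device %util and await, refreshed every 2 s
        iotop -oPa                               # which processes are actually generating the IO
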
  16. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Okay, so is Samsung OEM Datacenter SSD PM893 / Enterprise SSD PM893 a suitable drive for Ceph? Will this restore my cluster to its old glory?
  17. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Yes, you got me :) these are slow/standard (500 MB/s) consumer-grade SSDs (the NVMe drives were consumer-grade as well, albeit faster). Okay, but say I buy a Samsung OEM Datacenter SSD PM893 / Enterprise SSD PM893. They are not (or only minimally) faster at 550 MB/s. So what kind of load does Ceph...
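
    In short, what Ceph OSDs care about is small synchronous write latency (journal/WAL traffic), not the sequential MB/s on the datasheet, which is where drives with power-loss protection pull ahead. A hedged fio sketch for comparing candidate drives; /dev/sdX is a placeholder and running this against a raw device destroys its contents:

        # Sync 4k single-queue writes: the workload profile that hurts consumer SSDs in Ceph.
        fio --name=ceph-sync-test --filename=/dev/sdX \
            --ioengine=libaio --direct=1 --sync=1 \
            --rw=write --bs=4k --iodepth=1 --numjobs=1 \
            --runtime=60 --time_based
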
  18. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Hi, I have a small 3-node PVE cluster including Ceph, with 10 GbE each for Corosync and Ceph. I used to have one OSD (NVMe) in each node. Everything was nice and fast. Then I replaced each NVMe with two SSDs (as you are not supposed to have so few OSDs, and each OSD was already beyond the maximum...
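
    To separate "the new disks are the bottleneck" from network or configuration issues, one hedged option is to benchmark the cluster end-to-end; "testpool" is a placeholder pool name and the benchmark objects should be cleaned up afterwards:

        rados bench -p testpool 30 write --no-cleanup   # 30 s cluster-wide write benchmark
        rados bench -p testpool 30 seq                  # sequential read of the objects just written
        rados -p testpool cleanup                       # remove the benchmark objects
        ceph osd perf                                   # per-OSD commit/apply latency
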
  19. Offtopic: Rootless docker storage driver for Debian 11/12 VM?

    No, I will probably update to Debian 12 if I find a way to replace the storage driver. If not, I will create a new Debian 12 VM and install rootless Docker from scratch.
  20. Offtopic: Rootless docker storage driver for Debian 11/12 VM?

    Allow me to hijack this thread, as I have a similar problem. I have been using rootless Docker in a dedicated Debian 11 VM for a while now, and I would like to switch from fuse-overlayfs to overlay2, but I can't find where to change the configuration. Everything I find only talks...
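
    For rootless Docker the daemon config lives under the user's home rather than /etc/docker. A minimal sketch, assuming the standard rootless setup on Debian (overlay2 without fuse needs a kernel with unprivileged overlayfs, which Debian 12's kernel provides):

        mkdir -p ~/.config/docker
        cat > ~/.config/docker/daemon.json <<'EOF'
        {
          "storage-driver": "overlay2"
        }
        EOF
        systemctl --user restart docker
        docker info | grep -i 'storage driver'   # should now report overlay2

    Note that switching the storage driver effectively starts with an empty image store: the old fuse-overlayfs data stays under ~/.local/share/docker but is no longer used, so images need to be pulled or built again.
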