Search results

  1. Deleted many GBs from datastore but only freed a few MBs!?!?!?!?!

    Thanks. That then leaves me with this problem: when I manually start the GC, it stops after about 7%, complaining that there is not enough disk space. I'm assuming the same will happen when GC runs automatically. Any idea how I can give PBS the disk space it needs to delete stuff? The datastore sits...
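
    A minimal sketch of how garbage collection can be triggered and watched from the PBS shell, assuming a datastore named "store1" (a placeholder, not from this thread):

      # start garbage collection for the datastore and check its progress
      proxmox-backup-manager garbage-collection start store1
      proxmox-backup-manager garbage-collection status store1
      # see how full the underlying filesystem really is
      df -h /path/to/datastore
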
  2. Deleted many GBs from datastore but only freed a few MBs!?!?!?!?!

    Hi, I have a PBS instance that became full. Pruning didn't help because garbage collection couldn't complete successfully due to the lack of free disk space. So I decided to manually remove backups that I no longer need (removed the entire backup for each obsolete VM, not just...
  3. I still don't get Ceph...

    Thanks, I am aware of that and I guess I want Ceph to do what has to be done. That's okay for me. It is just that I had expected all the necessary rebalancing etc. to happen after I downed and outed the OSD. With that done, I expected the OSD to be gone in the eyes of Ceph. And that destroying...
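
    For context, a hedged sketch of the usual sequence for retiring an OSD (osd.4 is a placeholder ID): marking it out only triggers rebalancing away from it, while destroying it is what removes it from Ceph's view.

      ceph osd out 4                  # rebalance the OSD's data onto the remaining OSDs
      ceph -s                         # wait until the cluster reports HEALTH_OK again
      systemctl stop ceph-osd@4       # stop the daemon on the node that hosts it
      pveceph osd destroy 4           # remove it from the CRUSH map, auth and config
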
  4. I still don't get Ceph...

    Hmm, and do you know which PGs are relocated and why? Thanks!
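
    One generic way to see which PGs are being moved (a sketch, not from this thread): compare each PG's current ("acting") and target ("up") OSD sets.

      ceph pg dump pgs_brief | grep -E 'remapped|backfill'   # PGs whose placement is still changing
      ceph osd df tree                                        # per-OSD utilisation, shows where data ends up
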
  5. I still don't get Ceph...

    Hi, I have this little three-node PVE/Ceph cluster, and due to performance issues I got around to swapping my Ceph OSD SSDs once again. I outed one OSD in one of the nodes and Ceph started rebalancing/remapping/backfilling (as expected). After the rebalancing/remapping/backfilling was done, I...
  6. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Thanks, unfortunately, I don't know how to interpret the output:
      some avg10=0.00 avg60=0.00 avg300=0.00 total=19263172693
      full avg10=0.00 avg60=0.00 avg300=0.00 total=19236635844
      some avg10=0.00 avg60=0.00 avg300=0.00 total=2112331
      full avg10=0.00 avg60=0.00 avg300=0.00 total=2110601
      some...
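
    The quoted figures look like Linux pressure-stall (PSI) output; a rough sketch of how to read it (the path is the generic kernel interface, not specific to this cluster):

      # "some" = at least one task stalled on I/O, "full" = all non-idle tasks stalled;
      # avg10/avg60/avg300 are percentages of time over the last 10/60/300 seconds,
      # "total" is the accumulated stall time in microseconds since boot
      cat /proc/pressure/io
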
  7. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Dammit! Today I received my first pair of PM983 (U.2) drives. The U.2 interface is completely new to me (I had never heard of it before), but before I ordered I looked it up, obviously. I found that it is designed to work with SATA, SAS and PCI Express SSDs and that one just needs a suitable controller. So...
  8. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Okay, thanks. So I have ordered some of these SSDs to test. In the meantime, I have switched off all non-essential VMs, which brought down the IO delays substantially and rendered the remaining VMs usable again. But I still notice IO delay spikes in the GUI every now and then. Is it possible to...
  9. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Okay, so is a Samsung OEM Datacenter SSD PM893 / Enterprise SSD PM893 a suitable drive for Ceph? Will this restore my cluster to its old glory?
  10. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Yes, you got me :) these are slow/standard (500 MB/s) consumer-grade SSDs (the NVMe drives were consumer-grade as well, albeit faster). Okay, but say I buy a Samsung OEM Datacenter SSD PM893 / Enterprise SSD PM893. They are not (or only minimally) faster at 550 MB/s. So what kind of load does Ceph...
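
    Sequential MB/s figures say little here; Ceph (BlueStore) does a lot of small synchronous writes, which is where consumer drives without power-loss protection fall apart. A hedged fio sketch for testing exactly that pattern on a spare device (DEVICE is a placeholder, and the test overwrites it):

      fio --name=sync-write-test --filename=/dev/DEVICE --direct=1 --sync=1 \
          --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based
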
  11. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Hi, I have a small three-node PVE cluster including Ceph, with 10 GbE each for corosync and Ceph. I used to have one OSD (NVMe) in each node. Everything was nice and fast. Then I replaced each NVMe with two SSDs (as you are not supposed to have so few OSDs, and each OSD was already beyond the maximum...
  12. Offtopic: Rootless docker storage driver for Debian 11/12 VM?

    No, I will probably upgrade to Debian 12 if I find a way to replace the storage driver. If not, I will create a new Debian 12 VM and install rootless Docker from scratch.
  13. Offtopic: Rootless docker storage driver for Debian 11/12 VM?

    Allow me to hijack this thread as I have a similar problem. I have been using rootless Docker in a dedicated Debian 11 VM for a while now, and I would like to switch from fuse-overlayfs to overlay2, but I can't find where to change the configuration. Everything I find only talks...
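
    For rootless Docker the daemon configuration lives in the user's home rather than in /etc/docker; a minimal sketch, assuming a kernel recent enough for rootless overlay2:

      mkdir -p ~/.config/docker
      printf '{\n  "storage-driver": "overlay2"\n}\n' > ~/.config/docker/daemon.json
      systemctl --user restart docker
      docker info | grep -i 'storage driver'

    Note that images and containers created under fuse-overlayfs will not show up after the switch; they stay in the old driver's directory and would have to be pulled or recreated.
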
  14. One or more devices could not be used because the label is missing or invalid

    I have the same problem as the OP. The solution proposed above seems to try to repair the faulted disk "in place" (i.e. without replacing it, or rather replacing it with itself). So I am not replacing the disk and am trying to implement the suggested solution. But when I issue the command from above...
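
    Without the exact command being quoted, it is hard to say more, but an in-place replacement in ZFS usually looks roughly like this (pool and device names are placeholders):

      zpool status -v tank                           # identify the faulted device
      zpool labelclear -f /dev/disk/by-id/DISK       # wipe the invalid label (destroys data on that device)
      zpool replace -f tank /dev/disk/by-id/DISK     # resilver the same device back into the pool
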
  15. Why is my Ceph OSD dead (and won't start again)?????????????

    How about, for starters, everything since the last boot:
      -- Boot 41578566d8984f7789232b8d7aa546e9 --
      May 11 16:17:05 node2 systemd[1]: Starting Ceph object storage daemon osd.4...
      May 11 16:17:05 node2 systemd[1]: Started Ceph object storage daemon osd.4.
      May 11 16:17:05 node2 ceph-osd[14881]...
  16. Why is my Ceph OSD dead (and won't start again)?????????????

    I can do that, but the whole log has approx. 35k lines...
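
    There is no need for all ~35k lines; journalctl can be narrowed to the failing unit and the current boot, for example:

      journalctl -b -u ceph-osd@4.service --no-pager      # only osd.4 messages since the last boot
      journalctl -u ceph-osd@4.service -n 200 --no-pager  # or just the most recent 200 entries
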
  17. Why is my Ceph OSD dead (and won't start again)?????????????

    Sorry, I sometimes forget which commands I can enter on any host and which commands I need to enter on a specific host... So here goes:
      May 11 16:17:05 node2 ceph-osd[14881]: 2023-05-11T16:17:05.142+0200 7fbc2bc1d240 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-4>
      May 11...
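
    The "unable to find a keyring on /var/lib/ceph/osd/ceph-4" line usually means the OSD's runtime directory was never repopulated after the crash; a hedged sketch of how that is commonly recreated:

      ceph-volume lvm activate --all        # rebuild /var/lib/ceph/osd/ceph-4 (incl. keyring) from the LVM metadata
      ls -l /var/lib/ceph/osd/ceph-4/       # verify the keyring is now present
      systemctl start ceph-osd@4            # try starting the OSD again
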
  18. Why is my Ceph OSD dead (and won't start again)?????????????

    journalctl -u ceph-osd@4.service
      -- Journal begins at Thu 2022-11-24 13:49:04 CET, ends at Thu 2023-05-11 16:56:35 CEST. --
      -- No entries --
    Nothing (relating to this device).
  19. Why is my Ceph OSD dead (and won't start again)?????????????

    Hi, I have a three-node PVE cluster with Ceph installed and two pools spread across them, each pool with one disk (OSD) on each node. For some reason that I haven't found yet, one PVE (and Ceph) node crashed yesterday, rendering both pools degraded. After restarting the node, it came back online...
  20. Replace PBS machine - sync datastore to new machine enough?

    Hi, I want to replace my existing PBS machine with a new one (including new disks). Is it enough to sync the datastore(s) of my existing PBS to the new machine (and, of course, set up the machine in the same way, i.e. the same users, same tape machine etc.) and then swap the new machine in to...
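
    Syncing carries the backup snapshots themselves, but not the PBS configuration (users, ACLs, tape and job definitions live under /etc/proxmox-backup and have to be recreated or copied separately). A hedged sketch of pulling the old datastore onto the new machine, with placeholder names:

      # on the new PBS host: register the old machine as a remote, then pull its datastore
      proxmox-backup-manager remote create old-pbs --host 192.0.2.10 --auth-id 'sync@pbs' --password 'SECRET'
      proxmox-backup-manager pull old-pbs oldstore newstore
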