Search results

  1. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Yes, got me :) These are slow/standard (500 MB/s) consumer-grade SSDs (the NVMes were consumer grade as well, albeit faster). Okay, but say I buy a Samsung OEM Datacenter SSD PM893 / Enterprise SSD PM893. Those are not (or only minimally) faster at 550 MB/s. So what kind of load does Ceph...
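
    A minimal sketch of the arithmetic behind that question, assuming typical consumer-SSD sync-write behaviour (the IOPS figure below is an assumption, not a measurement):

        # Ceph (BlueStore) flushes its write-ahead log with sync writes; SSDs
        # without power-loss protection often serve these far below their
        # sequential rating, so 500 vs 550 MB/s is not the number that matters.
        sync_iops = 400               # assumption: consumer SSD under O_DSYNC
        write_kib = 4                 # small sync writes dominate the WAL
        per_osd_mb_s = sync_iops * write_kib / 1024
        print(f"~{per_osd_mb_s:.1f} MB/s per OSD for 4 KiB sync writes")
        # With size=3 replication each client write hits three OSDs,
        # shrinking the cluster-wide write budget further.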
  2. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Hi, I have a small 3-node PVE cluster including Ceph, with 10 GbE each for Corosync and Ceph. I used to have one OSD (NVMe) in each node. Everything was nice and fast. Then I replaced each NVMe with two SSDs (as you are not supposed to have so few OSDs, and each OSD was already beyond the maximum...
  3. Offtopic: Rootless docker storage driver for Debian 11/12 VM?

    No, I will probably update to Debian 12 if I find a way to replace the storage driver. If not, I will create a new Debian 12 VM and install rootless Docker from scratch.
  4. Offtopic: Rootless docker storage driver for Debian 11/12 VM?

    Allow me to hijack this thread, as I have a similar problem. I have been using rootless Docker in a dedicated Debian 11 VM for a while now, and I would like to switch from fuse-overlayfs to overlay2, but I can't find where to change the configuration. Everything I find only talks...
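
    For reference, a minimal sketch of where rootless Docker keeps this setting: rootless dockerd reads ~/.config/docker/daemon.json rather than /etc/docker/daemon.json. Note that overlay2 in rootless mode needs kernel support for unprivileged overlayfs (roughly 5.11 and later), which Debian 11's 5.10 kernel lacks, so this likely only works after the move to Debian 12:

        # Hedged sketch: write the storage-driver setting for rootless dockerd.
        import json
        import pathlib

        cfg = pathlib.Path.home() / ".config" / "docker" / "daemon.json"
        cfg.parent.mkdir(parents=True, exist_ok=True)
        cfg.write_text(json.dumps({"storage-driver": "overlay2"}, indent=2) + "\n")
        # Afterwards restart the user service: systemctl --user restart docker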
  5. One or more devices could not be used because the label is missing or invalid

    I have the same problem as the OP. The solution proposed above seems to repair the faulted disk "in place" (i.e. without replacing it, or rather replacing it with itself). So I am not replacing the disk and am trying to implement the suggested solution. But when I issue the command from above...
  6. Why is my Ceph OSD dead (and won't start again)?????????????

    How about, for starters, everything since the last boot:
        -- Boot 41578566d8984f7789232b8d7aa546e9 --
        May 11 16:17:05 node2 systemd[1]: Starting Ceph object storage daemon osd.4...
        May 11 16:17:05 node2 systemd[1]: Started Ceph object storage daemon osd.4.
        May 11 16:17:05 node2 ceph-osd[14881]...
  7. Why is my Ceph OSD dead (and won't start again)?????????????

    I can do that, but the whole log has approx. 35k lines...
  8. Why is my Ceph OSD dead (and won't start again)?????????????

    Sorry, I sometimes forget which commands I can enter on any host and which commands I need to enter on a specific host... So here it goes:
        May 11 16:17:05 node2 ceph-osd[14881]: 2023-05-11T16:17:05.142+0200 7fbc2bc1d240 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-4> May 11...
  9. Why is my Ceph OSD dead (and won't start again)?????????????

        journalctl -u ceph-osd@4.service
        -- Journal begins at Thu 2022-11-24 13:49:04 CET, ends at Thu 2023-05-11 16:56:35 CEST. --
        -- No entries --
    Nothing (relating to this device).
  10. Why is my Ceph OSD dead (and won't start again)?????????????

    Hi, I have a three-node PVE cluster with Ceph installed and two pools spread across them, each pool with one disk (OSD) on each node. For some reason that I haven't found yet, one PVE (and Ceph) node crashed yesterday, rendering both pools degraded. After restarting the node, it came back online...
  11. Replace PBS machine - sync datastore to new machine enough?

    Hi, I want to replace my existing PBS machine with a new one (including new disks). Is it enough to sync the datastore(s) of my existing PBS to the new machine (and, of course, set up the machine in the same way, i.e. the same users, same tape machine etc.) and then swap the new machine in to...
  12. Possible bug after upgrading to 7.2: VM freeze if backing up large disks

    Is there any news on the freeze issue? This keeps tripping me up: I thought I had found an acceptable compromise by doing only one nightly backup after stopping the VM (and relying on the Ceph cluster to preserve the VM data in the meantime). But I realized that the backups now fail due to the...
  13. Where is my zpool storage???

    Thank you. So if I understand correctly, I don't have to think about zvol block size, right? And what about ashift? Is that something I need to consider when setting up the pool? Is there anything else PBS might need in order to store data?
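
    As background, a small sketch of what ashift encodes; ashift=12 is the common rule of thumb, not an authoritative recommendation:

        # ashift is the log2 of the sector size ZFS assumes for a vdev; it is
        # fixed when the vdev is created and cannot be changed afterwards.
        for ashift in (9, 12, 13):
            print(f"ashift={ashift} -> {2 ** ashift}-byte sectors")
        # Most current HDDs/SSDs use 4K physical sectors, so ashift=12 is the
        # usual choice; too small a value forces read-modify-write cycles.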
  14. Where is my zpool storage???

    Thanks, Dunuin, for the detailed explanation. I am still at the beginning of my ZFS voyage. I am about to set up a new PBS machine (to replace my old one). My new PBS machine (same as the old) only has space for five 3.5" HDDs, and with raidz2 this would have resulted in massive overhead combined...
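
    A rough sketch of the overhead in question, assuming 4K sectors (the exact padding depends on the record/volblock size actually used):

        # Best-case raidz2 efficiency on 5 disks, ignoring padding:
        disks, parity = 5, 2
        print(f"raidz2 on {disks} disks: {(disks - parity) / disks:.0%} usable at best")
        # Small blocks make it worse: a stripe holding only one or two data
        # sectors still pays the full two-sector parity cost, and raidz pads
        # every allocation to a multiple of (parity + 1) sectors.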
  15. Help for Ceph Scrub / Performance

    So, um, is there a way to predict how long scrubbing is going to take? I am sitting here again, waiting for scrubbing to complete in order to replace a disk (not due to failure but to increase capacity)...
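
    There is no exact predictor, but a floor estimate falls out of data volume divided by an assumed throttled read rate (both figures below are assumptions):

        # Scrubbing must read every object, so a lower bound on duration is
        # data per OSD / effective read rate; Ceph throttles scrubbing via
        # options such as osd_scrub_sleep and the scrub load thresholds.
        data_per_osd_tb = 2.0        # assumption
        scrub_read_mb_s = 100.0      # assumption: throttled effective rate
        hours = data_per_osd_tb * 1_000_000 / scrub_read_mb_s / 3600
        print(f"~{hours:.1f} h for {data_per_osd_tb} TB at {scrub_read_mb_s:.0f} MB/s")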
  16. Where is my zpool storage???

    Hi, I have a ZFS pool on my PVE host made from five 3 TB drives in raidz1. That should give me roughly 15 TB - 3 TB = 12 TB of capacity. When I run "zpool list" it says 13.6T, which is close enough (for what I am talking about here). My issue is as follows: I have passed...
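
    As an aside, the 13.6T figure has a cleaner explanation: zpool list reports the raw pool size in binary TiB, parity included; the usable number shows up in zfs list instead. A quick check:

        # Drive vendors sell decimal terabytes; zfs tools report binary tebibytes.
        TB, TiB = 1000 ** 4, 1024 ** 4
        raw = 5 * 3 * TB
        print(f"raw:    {raw / TiB:.2f} TiB")    # ~13.64 -> the zpool list figure
        usable = 4 * 3 * TB                      # raidz1: one drive's worth of parity
        print(f"usable: {usable / TiB:.2f} TiB") # ~10.91 TiB before metadata overhead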
  17. Can't pass CDROM drive to VM!?!?!?!?!

    Thanks, I will try. Currently, my solution is to use a PCIe USB card that I pass through to the VM, with a USB DVD drive attached to the card. That works. Does anyone know why you can't pass through a movie disc? Does that have to do with the encryption? Thanks!
  18. Can't pass CDROM drive to VM!?!?!?!?!

    I tried changing the machine type to i440fx, but Windows wouldn't start like that. So I installed a new VM with i440fx, but the problems, sadly, persisted.
  19. Can't pass CDROM drive to VM!?!?!?!?!

    Oh, I wasn't aware of that. It is a DVD in the drive that I am trying to pass through. I will dig for those threads. Thanks!