Search results

  1. Q

Progress of LXC snapshot rollback?

    Hi, I accidentally deleted ~4TB in a container. I am now restoring a snapshot, which has been running for 1d 11h. Is there a way to find out how far along it is? I am using ZFS as the filesystem. Thanks for any help.
  2. Q

    Ceph recovery of HDD cluster slow

    Hi, I have a PVE cluster with 7 hosts, each of which has two 16 TB HDDs. The HDDs all use NVMes as DB disks. There are no running VMs on the HDDs; they are only used as cold storage. A few days ago I had to swap 2 of these HDDs on PVE1, and since I already had the server open, I added two...
  3. Q

    CEPH: uneven storage allocation on OSDs?

    Hello everyone, I'm experimenting with CEPH and wondering why the OSDs are so unevenly allocated. There are 7 PVE servers, each with a 2 TB and a 4 TB NVMe. I have an EC 4+3 pool with the hosts configured as the failure domain. Does anyone have any idea if this is normal or if I should try to distribute...
  4. Q

    Ceph HDDs slow

    Hi, I am currently experimenting with Ceph on a PVE cluster with 7 hosts. Each host has two OSDs, each a 16 TB SATA hard drive. Writing directly to the HDDs with dd, I can reach speeds of up to 270 MB/s. The storage and client networks are both connected at 10 Gbit/s, which I have also verified with iperf3. I...