Search results

  1. cluster performance degradation

    Use RAID10 if you want the best performance, not any RAIDZ.
  2. Transplant RAID10 Disks

    Try without the name; maybe it will work. So just: zpool import, or: zpool import -f with the pool name, should work.
  3. Transplant RAID10 Disks

    With ZFS, just add the disks to the new installation and run something like: zpool import pool (your pool name). This should be enough.
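The import steps from the two replies above can be sketched as shell commands; "tank" is a placeholder pool name, and the block is guarded so it is a no-op on machines without ZFS installed:

```shell
# Sketch of importing a ZFS pool whose disks were moved to a new host.
# "tank" is a placeholder pool name.
if command -v zpool >/dev/null 2>&1; then
    zpool import            # scan attached disks and list importable pools
    zpool import tank       # import the pool by name
    # zpool import -f tank  # force-import if the pool was not exported cleanly
else
    echo "zpool not available on this system"
fi
```

The plain `zpool import` with no arguments is the safe first step: it only lists what ZFS can see on the attached disks, so you can confirm the pool name before actually importing it.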
  4. cluster performance degradation

    Just do the backups again on them, just for peace of mind.
  5. cluster performance degradation

    Try qm unlock 102 or 103 on that host (pve2?).
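For context, qm unlock clears a stale lock left on a Proxmox VM (for example after an interrupted backup or migration). A hypothetical sequence on the node holding the VM, using VMID 102 from the thread:

```shell
# Hypothetical sequence on the Proxmox node holding the VM (pve2?).
# VMIDs 102/103 are the ones mentioned in the thread.
qm status 102    # check current state; a stale lock blocks management actions
qm unlock 102    # remove the lock
qm start 102     # then start/migrate the VM as needed
```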
  6. Ceph Ruined My Christmas

    Ceph is rather specific, because it is complex software that can kill your storage pretty fast. That is why I usually recommend building a 3-node test cluster, where you can stress it however you need, intentionally crash it, etc. Then, when you see everything Ceph can...
  7. cluster performance degradation

    Contact support, or hire someone who knows what they are doing.
  8. Ceph Ruined My Christmas

    In your case I would drop GPT and hire an engineer, at least for a few hours.
  9. Ceph Ruined My Christmas

    You have separate public and cluster Ceph networks; are they both available? Do you need a cluster network at all? If not, use just public_network. Also, a lot of OSDs are down; I would check whether the disks are working as expected (hddsentinel?). Moreover, why so many pools? Usually one...
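The checks suggested above (network settings, OSDs down) map to a few standard Ceph status commands; this is a sketch, run on any monitor node:

```shell
# Quick Ceph health checks referenced in the reply above.
ceph -s            # overall cluster status, including OSDs up/in counts
ceph osd tree      # which OSDs are down, and on which host they live
ceph config get mon public_network   # confirm the public network setting
```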
  10. Ceph Ruined My Christmas

    Cluster nodes are different from the Ceph cluster. Can you give us images of the node/Ceph status, even from just one node, but all views?
  11. cluster performance degradation

    2 nodes and a quorum device, if you don't want a full third node.
  12. cluster performance degradation

    Usually now (in the West), you use 3.xTB and 7.xTB SSDs or NVMe drives, and then you get good density and performance.
  13. cluster performance degradation

    Three is okay for a start, but those HDDs and this use case are an engineering error.
  14. cluster performance degradation

    This is not true; if you have 4 nodes, you can only lose one, since 3 is the minimum needed for quorum.
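The arithmetic behind this: Corosync needs a strict majority of votes, floor(n/2) + 1, so a 4-node cluster tolerates only one node loss. A quick check:

```shell
# Quorum math: a cluster of n nodes needs floor(n/2) + 1 votes,
# so it tolerates n - (floor(n/2) + 1) node failures.
for nodes in 2 3 4 5; do
    quorum=$(( nodes / 2 + 1 ))
    tolerated=$(( nodes - quorum ))
    echo "nodes=$nodes quorum=$quorum tolerated_failures=$tolerated"
done
```

Note that 4 nodes tolerate the same single failure as 3, which is why even node counts add cost without adding failure tolerance.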
  15. cluster performance degradation

    1. Yes, 1 min is the replication time, but you still need to spin up the machine on a different node when the first one dies, right? 2. For Ceph, 3 nodes is the minimum, and it works great if you have fast disks. With more nodes it is even better :)
  16. cluster performance degradation

    To offload Ceph writes to a DB/WAL device.
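On Proxmox, placing the OSD's RocksDB/WAL on a faster device is done when creating the OSD; a hedged sketch, with placeholder device paths:

```shell
# Hypothetical example: create a Ceph OSD on an HDD with its RocksDB
# (and implicitly the WAL) on a faster NVMe device.
# /dev/sdb and /dev/nvme0n1 are placeholder device paths.
pveceph osd create /dev/sdb -db_dev /dev/nvme0n1
# If no separate -wal_dev is given, the WAL lives on the DB device too.
```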