Search results

  1. I/O Performance issues with Ceph

    Crucial P3s are okay for Windows office machines, but nothing else.
  2. Ceph 19.2 Squid Stable Release and Ceph 17.2 Quincy soon to be EOL

    I've upgraded one 4-node cluster, no problems so far.
  3. Is there any tool to migrate VMs from VMware to Proxmox?

    https://forum.proxmox.com/threads/new-import-wizard-available-for-migrating-vmware-esxi-based-virtual-machines.144023/page-20#post-708443
  4. Automatic Updates

    You could update them via Ansible, one node at a time (see the sketch after this list).
  5. Proxmox Replication DC to DRC

    If your SAN is ZFS-based, then yes, you could use ZFS replication (storage replication in Proxmox); see the sketch after this list.
  6. problems with KINGSTON_SFYRD4000G disks in ceph cluster

    The problem is that these Kingstons are office or gaming drives; they are not intended for serious work. Buy better drives.
  7. Size of 3 node cluster Proxmox VE with NVMe

    Available = 19.2 TB. Because of max 3 / min 2 you lose about 30% by default, so around 12 TB, and if you don't overfill it, about 10 TB. As for disks, 6; but if all 6 disks die at the same time it will be hard for Ceph to go green again, and it would probably need around 30% more space (than is used). Keep that in mind.
  8. Size of 3 node cluster Proxmox VE with NVMe

    Available = 19.2 TB, usable = 12.8 TB, and that is the maximum. As always, no more than around 80% should be filled, so let's say 10 TB (see the worked numbers after this list). Excluding a whole node, probably the same number of disks can die.
  9. Size of 3 node cluster Proxmox VE with NVMe

    At most you can tolerate the failure of probably a whole node's worth of disks, i.e. one whole node down. If 2 nodes die, data doesn't get corrupted; you just lose quorum and write access to the cluster. Your cluster is ideal for getting started with Ceph and just testing everything (in production :) ).
  10. High Availability + Replication = Disaster

    Then maybe get the best enterprise NVMe and use local storage.
  11. [SOLVED] Best Practice ZFS/SW Raid HDD

    Yes, metadata is probably up to 1%, but usually around 0.3%.
  12. [SOLVED] Best Practice ZFS/SW Raid HDD

    But this is not how PBS works. PBS doesn't use streaming or anything inline; blocks are usually scattered across the whole RAID, so you have to count random IOPS, not the maximum.
  13. [SOLVED] Best Practice ZFS/SW Raid HDD

    You won't get better write or access speed with a special device. If it is Hetzner, tear the server down and bring it back up with at least RAID10.
  14. [SOLVED] Best Practice ZFS/SW Raid HDD

    Yeah, it will bring a bit of performance when viewing backups or similar.
  15. [SOLVED] Best Practice ZFS/SW Raid HDD

    With RAID5 there aren't any performance gains, whatever you do. I have one PBS on it (since the beta) and this is pretty much the performance you get from it. If you need a faster turnaround, get SSDs, or at least RAID10 with a special device.
  16. High Availability + Replication = Disaster

    Anything except Ceph with this number of nodes is, in my opinion, out of the question.
  17. Enterprise HW Config w/ Ceph

    I don't have exact numbers, but maybe something like 10 drives, 10 nodes, all flash?
  18. Enterprise HW Config w/ Ceph

    I only had one client who split Ceph and Proxmox, and that also worked well. But the reason I push small hyperconverged clusters is that they are easier to maintain, and a crash is less of a problem, because you usually build several small clusters.
  19. Enterprise HW Config w/ Ceph

    An odd number of nodes, usually fewer than 10 OSDs per node, and that should be it for smallish clusters (<15 nodes). Depending on HDD, SSD or NVMe, 10G is the minimum, with probably 25G or even 40G in mind.
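
Sketch for result 4 (updating nodes one by one): a minimal, hypothetical example of running the standard Debian/Proxmox update commands against each node in turn over SSH. The node names are placeholders, key-based root SSH access is assumed, and in practice the same thing is usually done with an Ansible playbook run one host at a time.

```python
#!/usr/bin/env python3
"""Update a list of Proxmox nodes sequentially over SSH (sketch only)."""
import subprocess

NODES = ["pve1", "pve2", "pve3"]  # hypothetical node names
UPDATE_CMD = "apt-get update && apt-get -y dist-upgrade"  # standard Debian update

for node in NODES:
    print(f"--- updating {node} ---")
    # check=True aborts at the first failing node, so a bad update
    # is not rolled out to the rest of the cluster.
    subprocess.run(["ssh", f"root@{node}", UPDATE_CMD], check=True)
```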
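Sketch for result 5 (ZFS replication): Proxmox's storage replication manages this itself, including incremental snapshots and cleanup, but the underlying idea is a ZFS snapshot shipped with zfs send/receive. The dataset, snapshot and host names below are hypothetical.

```python
#!/usr/bin/env python3
"""One zfs send/receive cycle, the idea behind storage replication (sketch only)."""
import subprocess

DATASET = "rpool/data/vm-100-disk-0"  # hypothetical source dataset
SNAP = f"{DATASET}@repl-manual-1"     # hypothetical snapshot name
TARGET = "root@dr-site"               # hypothetical DR-site host

# 1. Snapshot the source dataset.
subprocess.run(["zfs", "snapshot", SNAP], check=True)

# 2. Stream the snapshot to the remote pool over SSH.
send = subprocess.Popen(["zfs", "send", SNAP], stdout=subprocess.PIPE)
subprocess.run(["ssh", TARGET, "zfs", "receive", "-F", DATASET],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```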
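Worked numbers for results 7 and 8, taking the replies' own figures as given (19.2 TB "available", roughly a third of that counted as lost to the max 3 / min 2 pool, and the usual ~80% fill guideline):

```python
# Capacity arithmetic from results 7-8, using the figures quoted there.
raw_available = 19.2                # TB, the "available" figure in the replies

usable_max = raw_available * 2 / 3  # the replies treat ~1/3 as lost -> 12.8 TB
practical = usable_max * 0.8        # don't fill Ceph past ~80%      -> ~10.2 TB

print(f"usable max = {usable_max:.1f} TB, practical = {practical:.1f} TB")
```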