Search results

  1. N

    Size of 3 node cluster Proxmox VE with NVMe

    You can tolerate the failure of probably a whole node's worth of disks, i.e. one whole node down. If 2 nodes die, data doesn't get corrupted; you just don't have quorum and write access to the cluster (see the quorum sketch after these results). Your cluster is ideal for Ceph beginnings and just testing everything (in production :) ).
  2. N

    High Availability + Replication = Disaster

    Then maybe you get the best enterprise NVMe and local storage.
  3. N

    [SOLVED] Best Practice ZFS/SW Raid HDD

    Yes, the metadata cache is probably up to 1%, but usually around 0.3% (see the sizing sketch after these results).
  4. N

    [SOLVED] Best Practice ZFS/SW Raid HDD

    But this is not how PBS works. PBS doesn't use streaming or anything inline; blocks are usually scattered across the whole RAID, so you count random IOPS, not the maximum throughput.
  5. N

    [SOLVED] Best Practice ZFS/SW Raid HDD

    You won't get better write or access speed with a special device. If it is Hetzner, demolish the server and bring it up with at least RAID10.
  6. N

    [SOLVED] Best Practice ZFS/SW Raid HDD

    Yeah, it will bring a bit of performance when viewing backups or similar.
  7. N

    [SOLVED] Best Practice ZFS/SW Raid HDD

    With RAID5 there aren't any performance gains, whatever you do. I have one PBS with it (since the beta) and this is pretty much the performance you get from it. If you need faster turnaround, get SSDs, or at least RAID10 with a special device.
  8. N

    High Availability + Replication = Disaster

    Anything except Ceph at this number of nodes is, in my opinion, out of the question.
  9. N

    Enterprise HW Config w/ Ceph

    Don't have an exact number, but maybe something like 10 drives, 10 nodes, all flash?
  10. N

    Enterprise HW Config w/ Ceph

    I only had one client who split Ceph and Proxmox, and it also worked well. But the reason I push small hyperconverged clusters is that they are easier to maintain, and a crash hurts less, because you usually build several small clusters.
  11. N

    Enterprise HW Config w/ Ceph

    An odd number of nodes, usually fewer than 10 OSDs per node, and that should be it for smallish clusters (<15 nodes). Depending on HDD, SSD or NVMe, 10G is the minimum, with probably 25 or even 40 Gbit in mind.
  12. N

    [TUTORIAL] Broadcom NICs down after PVE 8.2 (Kernel 6.8)

    I've upgraded the firmware of the card to 226.0.145.0 / pkg 226.1.107.1, but the problem remains. So, blacklisting it is.
  13. N

    How does this sound for a 3-node Proxmox Ceph HA cluster?

    The network is okay for a small number of VMs; RAM probably is too, if those VMs don't use more than 10 GB RAM per VM. These SSDs work okay, I have a few of them in some clusters. All in all, a nice 3-node cluster to start with.
  14. N

    AMD Ryzen 9 5950X 8.2.2 Kernel 6.8.4-3-pve crashing/rebooting every 2-3 days

    I'm working normally on PVE 8, kernel 6.8, a 5950X and an NVIDIA 3900, with passthrough working. Can you update the BIOS?
  15. N

    Compression drastically affects Proxmox Backup Server performances

    You think you have a Honda, but it is a rebranded Trabant. If you have 100 TB of HDDs, they work okay if you have 100 backups. But once you have 5k+ backups, you will see the pain.
  16. N

    Compression drastically affects Proxmox Backup Server performances

    Nobody told you to shut up, only that your logic doesn't make sense. All the compression, deduplication and other things are costly, and this cost translates into IOPS and CPU. I have installed and maintained PBS nodes from 1-140 TB, some pure HDD, some with SSD metadata and some full SSD. Every...
  17. N

    Lenovo SR650

    With Lenovo, we have both the SR530 and some others. So I usually work on everything with internal RAID and LVM-Thin.
  18. N

    Compression drastically affects Proxmox Backup Server performances

    That is your logic, but if your customers ask for sub-60-minute recovery, you buy enterprise SSDs.
  19. N

    Proxmox VE 8.2 released!

    You can roll back packages, it is just not that easy.
  20. N

    Is the Ceph replicate or duplicate files on 3 nodes

    100 Mbit is unrealistic even for pve-zsync.
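
Result 1 above turns on majority quorum: with 3 nodes, losing 2 leaves the survivor unable to form a quorum, so the cluster stops accepting writes even though the data itself is intact. Below is a minimal sketch of that vote arithmetic; it assumes the corosync default of one vote per node and is purely illustrative, not Proxmox's actual implementation.

```python
# Minimal sketch of majority-quorum math for the 3-node scenario in result 1.
# Assumes the corosync default of one vote per node; values are illustrative.

def has_quorum(total_nodes: int, nodes_up: int) -> bool:
    """A majority quorum needs floor(total/2) + 1 votes present."""
    needed = total_nodes // 2 + 1
    return nodes_up >= needed

for down in range(4):
    up = 3 - down
    state = "quorate, writes allowed" if has_quorum(3, up) else "no quorum, cluster goes read-only"
    print(f"3-node cluster, {down} node(s) down: {state}")
```

Running this shows the behaviour described in the snippet: one node down keeps quorum (2 of 3 votes), while two nodes down drops to 1 vote and the cluster refuses writes.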
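Result 3 quotes a rule of thumb for sizing a ZFS metadata special device: roughly 0.3% of pool capacity in practice, up to about 1% to be safe. The sketch below just turns that percentage range into numbers; the 50 TiB pool size is a made-up example, not a recommendation from the thread.

```python
# Back-of-the-envelope sizing for a ZFS metadata special device, using the
# 0.3%-1% rule of thumb quoted in result 3. The 50 TiB pool is a hypothetical example.

def special_device_range(pool_tib: float) -> tuple[float, float]:
    """Return (typical, upper) special-device size in GiB for a given pool size."""
    typical = pool_tib * 1024 * 0.003   # ~0.3% of pool capacity
    upper = pool_tib * 1024 * 0.01      # up to ~1% of pool capacity
    return typical, upper

typical, upper = special_device_range(50)
print(f"50 TiB pool: plan roughly {typical:.0f}-{upper:.0f} GiB of special-device space")
```

For the hypothetical 50 TiB pool this works out to roughly 150-510 GiB of special-device space, which is why small mirrored SSDs are usually enough for metadata even on large HDD pools.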