Recent content by ness1602

  1. "Hybrid" pool understanding

    Just add the test repository; ZFS 2.4 is there (see the sketch below).
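
    A minimal sketch of adding the test repository, assuming Proxmox VE 9 on Debian 13 (trixie); the file name is arbitrary and the exact repository layout should be checked against the Proxmox documentation:

        # /etc/apt/sources.list.d/pvetest.list (hypothetical file name)
        deb http://download.proxmox.com/debian/pve trixie pvetest

        # refresh and pull in the newer ZFS packages
        apt update && apt full-upgrade
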
  2. Call for Evidence: Share your Proxmox experience on European Open Digital Ecosystems initiative (deadline: 3 February 2026)

    Even though I am in Europe, not in the EU, this is a great initiative. And I agree about Proxmox's influence on virtualization services.
  3. Proxmox Virtual Environment 9.1 available!

    ZFS 2.4 is in the test repository at the moment, and it is working well on more than 5 machines so far.
  4. Review appreciated/best practise: new pve environment with ceph

    My recommendation is to use dedicated physical links for corosync, not VLANs. As for the other things, this is okay. All links should be active-passive in Proxmox (see the sketch below), so that you don't have to care if something dies, and that is it. Ceph will work great in that case, and those SSDs work really well in practice.
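
    As a rough illustration of the active-passive advice, an active-backup bond in /etc/network/interfaces might look like this; eno1/eno2 are placeholder NIC names, and the corosync links themselves would sit on their own physical ports rather than on a VLAN:

        auto bond0
        iface bond0 inet manual
            # active-backup: one link carries traffic, the other takes over if it dies
            bond-slaves eno1 eno2
            bond-mode active-backup
            bond-miimon 100
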
  5. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    Then okay, give them one disk (Ceph) for the OS and one for the VMs, and that is that. No RAID inside the guests and no other nonsense.
  6. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    I mean, it makes sense if you sell something like a Proxmox instance with 2-4 CPUs and 64 GB of RAM, so that you can spin up around 20 of them.
  7. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    Okay, you offer customers a hypervisor. It makes sense, not much, but it does. I would always use Ceph for that.
  8. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    It is an okay offering; I see there is a financial reason for it. Unfortunately, only nested Proxmox is an option, but I cannot tell you how big the performance loss will be. Maybe install the OS on RAID somewhere and add 2x NVMe as passthrough for VM data (see the sketch below).
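
    A hypothetical sketch of passing two NVMe drives through to a nested Proxmox VM with PCIe passthrough; VM ID 100 and the PCI addresses are placeholders, and it assumes IOMMU is enabled on the host and the guest uses the q35 machine type:

        qm set 100 -hostpci0 0000:01:00.0,pcie=1
        qm set 100 -hostpci1 0000:02:00.0,pcie=1
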
  9. ZFS pool usage limits vs. available storage for dedicated LANcache VM

    He is already using SSDs for its pool. You can add mirrors to mirrors, effectively getting RAID 10 (see the sketch below).
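
    For illustration, adding a second mirror vdev to a pool that already consists of one mirror stripes the data across both mirrors, which is the RAID 10-like layout meant above; the pool name tank and the device paths are placeholders:

        zpool add tank mirror /dev/disk/by-id/ssd-C /dev/disk/by-id/ssd-D
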
  10. [SOLVED] Weird RAID configurations for redundancy

    Maybe in the paranoid case you can go with RAIDZ3, so that 3 disks can die? And set up a spare (see the sketch below).
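
    A sketch of that paranoid layout, assuming a new pool named tank and placeholder device names: the RAIDZ3 vdev tolerates three simultaneous disk failures, and a hot spare is attached on top:

        zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg spare sdh
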
  11. VMware user here

    I've worked with a few companies that migrated a huge load of RDSes, so my recommendation is to start with a 3-node Ceph cluster and 10G networking and work from there. Go up to, let's say, 10-15 nodes, then create a new cluster. No problem with that.
  12. Kernel 6.17 bug with megaraid-sas (HPE MR416)

    I work around that on Supermicro by shutting down all VMs and CTs and then updating/upgrading everything (see the sketch below). Try that.
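
    A rough sketch of that procedure, assuming the qm/pct CLI on the node itself; the loops simply ask every listed guest to shut down (already stopped guests just return an error), and the upgrade runs afterwards:

        for id in $(qm list  | awk 'NR>1 {print $1}'); do qm shutdown "$id"; done
        for id in $(pct list | awk 'NR>1 {print $1}'); do pct shutdown "$id"; done
        apt update && apt dist-upgrade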