Search results

  1. pmxcfs + qcow2

    But Gluster is EOL, or soon will be, just so you know.
  2. Proxmox vs Openmediavault

    Regarding the Turion processor, I would just spin up OpenMediaVault for a simple file-share machine.
  3. PVE-ZSYNC - Any thoughts or feedback?

    Yes, snapshot-based replication is similar to (or the same as) what pve-zsync does.
  4. PVE-ZSYNC - Any thoughts or feedback?

    There is rbd-mirroring, with and without a journal. So yes.
  5. PVE-ZSYNC - Any thoughts or feedback?

    This makes sense, but I have to ask: if you need a really low RTO/RPO, why don't you use Ceph? As for pve-zsync, it works really well, is as fast as an SSH tunnel can be, and you get great email notifications when something burns.
  6. [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    Yes, you are right, but the topic is very_small_cluster, not build-your-own-datacenter. In such small clusters you don't have any redundancy, except maybe a dedicated Ceph switch (besides the regular switch for all other traffic).
  7. PVE-ZSYNC - Any thoughts or feedback?

    pve-zsync is great; I'm using it for backups and for random offsite file backups. But if you have PBS, why not just set up a remote PBS and push/pull?
  8. [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    Not really; this is HA for VMs/CTs, not for networking.
  9. Shared Remote ZFS Storage

    Usually pve-zsync is enough for async replication outside of clusters, if you want that.
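    Several of these replies recommend pve-zsync for async replication. A minimal sketch of that workflow follows; the VMID, remote address, and dataset name are placeholders, not anything from the posts.

    ```shell
    # Create a recurring sync job for VM 100 to a remote ZFS dataset
    # (run on the source node; 192.0.2.10 and tank/backup are placeholders).
    pve-zsync create --source 100 --dest 192.0.2.10:tank/backup \
        --name nightly --maxsnap 7

    # One-off manual sync of the same VM, with verbose output.
    pve-zsync sync --source 100 --dest 192.0.2.10:tank/backup \
        --name manual --verbose

    # Show configured jobs and their last status.
    pve-zsync list
    ```

    `create` installs a cron entry on the source node, so after the first run the replication repeats on its own; `--maxsnap` caps how many snapshots are kept per side.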
  10. Shared Remote ZFS Storage

    In these cases, waltar, people usually don't read enough, then don't test before pushing to production, and then write to the forums with cries for help. Ceph is really battle-tested software, and I've worked with numerous companies who would never leave it. But on the other hand, I've also...
  11. Shared Remote ZFS Storage

    Until they change the license and it's "maintain your own".
  12. Shared Remote ZFS Storage

    But people just don't understand that you can have that with Ceph; just try with any enterprise disks and you will see. Yes, for a 25-node, petabyte cluster you will need everything jtreblay says, but for entry level, plain 10G with enterprise disks will suffice. I have a customer, 2.5G, Samsung...
  13. Shared Remote ZFS Storage

    I don't get it: if Ceph says the minimum is 3 nodes, why don't people just accept that it works (really great, sometimes) with just 3 nodes? You don't need a 15k initial investment, just an okay network and good drives. Next.
  14. No VMs with CEPH Storage will start after update to 8.3.2 and CEPH Squid

    I would (and do) always use an active-passive bond, and the next thing I would check is firewall rules.
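    For reference, an active-passive bond as mentioned here looks roughly like this in Proxmox's /etc/network/interfaces; the interface names and addresses are placeholders for illustration.

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-miimon 100

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.5/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
    ```

    active-backup needs no switch-side configuration, which is why it is a safe default when LACP support is uncertain.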
  15. cluster performance degradation

    Check the cabling; you have errors on mirror-1 and mirror-0, with total damage on mirror-0.
  16. Ceph Ruined My Christmas

    Bring up independent Proxmox clusters for each location.
  17. Ceph Reef:

    If I'm seeing this right, the usage of ecoVolumes is really low, so they don't need autoscaling.
  18. Ceph Reef:

    Can we see your pools? Usually there is an autoscaler problem if you leave some pools on defaults, or something like that.
  19. cluster performance degradation

    Yes, but these are striped mirrors, not just plain mirrors, so it is easier to call them RAID10 (which means striped mirrors). By default, raidz(1,2,3) will always be slower than anything else, because it does mathematical parity calculations. Execute this on the ZFS pool: sudo fio...
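    The fio command in this post is truncated, so here is one plausible invocation for the kind of 4k random-write test being suggested; the target directory, size, and runtime are assumptions, not the poster's exact command.

    ```shell
    # Hypothetical 4k random-write benchmark against a ZFS-backed directory.
    # /tank/fio-test is a placeholder path on the pool under test.
    sudo fio --name=randwrite --directory=/tank/fio-test \
        --rw=randwrite --bs=4k --size=2G \
        --ioengine=libaio --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting
    ```

    Comparing the reported IOPS between a striped-mirror pool and a raidz pool of the same disks makes the parity-write cost visible.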
  20. cluster performance degradation

    Hardly 10%: RAID5 reads are okay, but its writes have the IOPS of one disk, whereas RAID10 gives roughly 3× the write IOPS.
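    The arithmetic behind that claim can be sketched as follows; the 6-disk array and the 200 random-write IOPS per disk are assumed example figures, not from the post.

    ```shell
    # Back-of-the-envelope write-IOPS comparison for a 6-disk array,
    # assuming (hypothetically) 200 random write IOPS per spinning disk.
    disks=6
    disk_iops=200

    # RAID10: small writes spread over disks/2 mirror pairs.
    raid10_write=$(( disks / 2 * disk_iops ))

    # RAID5: each small random write costs a read-modify-write cycle
    # (~4 I/Os), so the whole array delivers roughly N/4 disks' worth.
    raid5_write=$(( disks * disk_iops / 4 ))

    echo "RAID10 write IOPS: $raid10_write"
    echo "RAID5  write IOPS: $raid5_write"
    ```

    With these numbers RAID10 lands at 600 write IOPS versus 300 for RAID5, which matches the rough "one disk's worth of writes" complaint for parity RAID on small random I/O.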