Search results

  1. ZFS over iSCSI Network HA

    Hey, LACP is for the network redundancy and, as you have advised, will work :) This is more to do with the controller. In theory I don’t see why it wouldn’t work; it was more a question of whether you had tested something like the above and what the real-world results are. All good, thanks for taking the time...
  2. ZFS over iSCSI Network HA

    For example, you can get a TrueNAS shared-storage box with dual controllers running in active/passive mode. If one controller fails the second takes over to minimise any potential outage. The second controller takes over in about 20 seconds, so iSCSI, if set up correctly, will keep working and the VMs should...
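For the initiator to ride out a ~20-second controller takeover like the one described, the open-iscsi replacement timeout on the PVE host needs to be longer than the failover window. A minimal sketch; the setting name is standard open-iscsi, but the 30-second value and the target IQN/portal are assumptions for illustration:

```shell
# /etc/iscsi/iscsid.conf on the PVE host (default for this setting is 120 s,
# which already covers a 20 s takeover; 30 s shown as a tighter assumed value):
# node.session.timeo.replacement_timeout = 30

# Apply the same value to an already-configured node record
# (IQN and setting updated via iscsiadm's record-update syntax):
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vmstore \
  -o update -n node.session.timeo.replacement_timeout -v 30
```

During the takeover the session simply queues I/O until the surviving controller answers, so guests see a stall rather than an I/O error, provided the timeout is not exceeded.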
  3. ZFS over iSCSI Network HA

    Mir, I’m trying to get my head around the correct use case for ZFS over iSCSI. Apart from: - running on central storage - native ZFS snapshots - data scrubbing etc. - reduced ZFS memory overhead per host, to be able to run more VMs (I think this is a big one), do live migrations work the same as normal...
  4. ZFS over iSCSI Network HA

    Is this you? https://github.com/TheGrandWazoo/freenas-proxmox
  5. ZFS over iSCSI Network HA

    Mir, have you played with ZFS over iSCSI? I had noticed some caveats while reading forum posts, but the information is scattered everywhere. It would be great if all this info were collated in one place so users could make an informed decision on whether this is the right protocol for them. Would love to...
  6. ZFS over iSCSI Network HA

    Hi all, I have been looking around and not found an answer yet. I’ve noticed that the Proxmox wiki, which is quite out of date, mentions that MPIO is not available for ZFS over iSCSI. Is LACP or NIC active/passive available for ZFS over iSCSI? Considering using a Free/TrueNAS system with dual...
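LACP at the network layer works independently of the storage plugin, since it happens below iSCSI entirely. A sketch of an 802.3ad bond for the storage NICs in `/etc/network/interfaces` on a PVE node; the interface names and the 10.0.0.0/24 storage subnet are assumptions, and the switch ports must also be configured for LACP:

```text
auto bond0
iface bond0 inet static
    address 10.0.0.11/24
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100
```

Note that a single iSCSI TCP session still hashes onto one member link, so LACP gives link-failure redundancy and aggregate throughput across sessions, not 20 Gb for one session; that is a different guarantee from MPIO.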
  7. ZFS, iSCSI, and multipathd

    danielb, what do you see as issues with host-based MPIO that we should look out for? Cheers, Gerardo
  8. ZFS over iSCSI plugin

    OK, I see how this is working now. I’m looking at it from a different perspective: the hypervisor managing the multipathing and only having a few LUNs, i.e. max 9 TB in size with 30-50 VMs per LUN. This would make the VM disk just a sparse file sitting on block storage formatted with something...
  9. ZFS, iSCSI, and multipathd

    Sure, I understand your comment, but it’s relevant how they dealt with the issue, as there is obviously a solution out there that works for VMware and Hyper-V (Windows). I am aware these both have a different stack, but understanding how something works, even if it’s different overall, may...
  10. ZFS, iSCSI, and multipathd

    Wondering how VMware does it? How have they dealt with the problem?
  11. Max number of hosts recommended for Ceph and PVE hyperconverged

    Yes, thank you, agreed, a minimum of 4 nodes :) Unfortunately, from my current research, the new hyperconverged chassis are pretty much limited to one type of storage. I can see where a separate Ceph cluster would benefit from multiple storage tiers, or even older servers that can accommodate...
  12. Max number of hosts recommended for Ceph and PVE hyperconverged

    Only saying 10 Gb (SFP+ fibre) as we already have an existing investment in switches, but we could easily purchase some larger-capacity switches. The current 10 Gb SFP+ bidirectional is 20 Gb, and if using LACP we can bond and get 40 Gb from a dual 10 Gb card. Can Ceph work in this way? Just...
  13. Max number of hosts recommended for Ceph and PVE hyperconverged

    SG90, thanks for your input. I guess I’m coming from poor past experiences with distributed storage, and it has made me hesitant. Normally we run hosts with 256 GB RAM and a dual 10 Gb Ethernet dedicated storage network. vSAN only uses 1 x 10 Gb port but can LACP multiple ports for...
  14. ZFS, iSCSI, and multipathd

    Just wondering, why is QEMU the show stopper here? Maybe it’s my lack of understanding, but shouldn’t multipathing be determined by the OS, in this case Debian? I would be interested in understanding this issue in more detail. And since it’s not currently supported, what’s the best way around...
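One reading of the "QEMU show stopper": the ZFS over iSCSI plugin attaches LUNs through QEMU's built-in userspace iSCSI initiator (libiscsi), which bypasses the host's block layer entirely, so the kernel's dm-multipath never sees those sessions. A hedged sketch of the host-based workaround people describe, logging the Debian host's own initiator into the target via two portals and letting multipathd coalesce the paths; the IQN and portal addresses are placeholders:

```shell
# Discover and log in to the same target over two separate storage subnets,
# using the host's open-iscsi initiator instead of QEMU's libiscsi:
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vmstore -p 10.0.0.1 --login
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vmstore -p 10.0.1.1 --login

# With multipath-tools installed, multipathd merges the resulting block
# devices into one dm device; inspect the path topology:
multipath -ll
```

The trade-off is that the resulting dm device is plain shared block storage to Proxmox, so the plugin's per-VM ZFS zvol provisioning and native snapshots on the target are lost along the way.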
  15. ZFS over iSCSI plugin

    Hi team, I was wondering if there is any movement on the ZFS over iSCSI plugin for multipath? https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI It looks like this wiki page was last updated for FreeBSD v10, which is a little old now. Seeing if there are any updates to this plugin? Cheers, G
  16. Max number of hosts recommended for Ceph and PVE hyperconverged

    I don’t have a specific number in mind; I’m looking into best fit and costing compared to other solutions. We are using vSAN on our VMware deployment and thinking about what a good equivalent for Proxmox would be. I have been looking at Linbit DRBD and other solutions like TrueNAS, which would use the...
  17. Max number of hosts recommended for Ceph and PVE hyperconverged

    Hi team, I’m investigating using Ceph and PVE hyperconverged. Reading over the documentation, there are some anecdotal numbers for small deployments. What constitutes a small deployment, is it the minimum of 3 hosts? What’s the maximum recommended number of hosts that can be run in a cluster...
  18. Wish: Amazon S3 integration

    Hi LnxBil, thanks for the reference post. I had a quick look at the script and it’s quite straightforward. What I would prefer is to have these additional features: incorporate some sort of process for error checking once the backup is uploaded (verify). Another process here that just slipped my...
  19. Wish: Amazon S3 integration

    Hi Alwin, is this something we could engage the Proxmox team to develop? Cheers, G
  20. Hey thanks for sharing :) any issues with v6 PVE and 3par? about to update a server shortly...

    Hey, thanks for sharing :) Any issues with v6 PVE and 3PAR? About to update a server shortly, but it’s standalone so it may not encounter all the issues others have been reporting with corosync. Cheers, G