Search results

  1.

    ZFS, iSCSI, and multipathd

    danielb, what do you see as issues with host-based MPIO that we should look out for? Cheers, Gerardo
  2.

    Zfs over iscsi plugin

    ok, I see how this is working now. I'm looking at it from a different perspective: the hypervisor managing the multipathing and only having a few LUNs, i.e. max 9 TB in size with 30-50 VMs per LUN. This would make the VM disk just a sparse file sitting on block storage formatted with something...
  3.

    ZFS, iSCSI, and multipathd

    Sure, I understand your comment, but it's relevant to how they dealt with the issue, as there is obviously a solution out there that works for VMware and Hyper-V (Windows). I am aware these both have a different stack, but understanding how something works, even if it's different overall, may...
  4.

    ZFS, iSCSI, and multipathd

    Wondering how VMware does it? How have they dealt with the problem?
  5.

    Max number of hosts recommended for Ceph and PVE hyperconverged

    Yes, thank you, agreed, a minimum of 4 nodes :) Unfortunately the new hyperconverged chassis pretty much take 1 type of storage, from my current research. I can see where a separate Ceph cluster would benefit from multiple storage tiers, or even older servers that can accommodate...
  6.

    Max number of hosts recommended for Ceph and PVE hyperconverged

    Only saying 10 Gb (SFP+ fibre) as we already have an existing investment in switches, but we could easily purchase some larger-capacity switches. The current 10 Gb SFP+ is 20 Gb bidirectional, and if using LACP we can bond and get 40 Gb from a dual 10 Gb card. Can Ceph work in this way? Just...
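
    Not from the thread, just the back-of-the-envelope numbers behind that question, written out as a small Python sketch (link counts and speeds assumed from the post; with LACP the aggregate scales across many flows, but a single stream still tops out at one link's rate):

        # Rough LACP bandwidth arithmetic. Assumptions: 2 x 10 Gb SFP+ links per node,
        # layer3+4 hashing, so flows are spread across links but never split across them.
        links_per_bond = 2          # dual-port 10 Gb SFP+ card
        link_speed_gbit = 10        # per-port line rate, one direction

        aggregate_gbit = links_per_bond * link_speed_gbit   # many flows combined: 20 Gb/s
        per_flow_gbit = link_speed_gbit                     # a single stream still caps at 10 Gb/s
        full_duplex_gbit = 2 * aggregate_gbit               # counting both directions, as the post does

        print(f"aggregate (many flows): {aggregate_gbit} Gb/s")
        print(f"single flow ceiling:    {per_flow_gbit} Gb/s")
        print(f"bidirectional total:    {full_duplex_gbit} Gb/s")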
  7.

    Max number of hosts recommended for Ceph and PVE hyperconverged

    SG90, thanks for your input. I guess I'm coming from past poor experiences with distributed storage and it's made me hesitant. Normally we run hosts with 256 GB RAM and a dual 10 Gb Ethernet dedicated storage network. vSAN only uses 1 x 10 Gb port but can LACP multiple ports for...
  8.

    ZFS, iSCSI, and multipathd

    Just wondering what about QEMU is the show stopper here? Maybe it's my lack of understanding, but shouldn't multipathing be handled by the OS, in this case Debian? Would be interested in understanding this issue in more detail. And since it's not currently supported, what's the best way around...
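
    Not part of the thread, but a minimal sketch of how you could confirm that the OS layer (dm-multipath on the Debian/PVE host) is the one seeing both paths, assuming multipath-tools is installed and this runs as root; QEMU only ever sees the single /dev/mapper device sitting on top:

        import subprocess

        # Ask the host's dm-multipath layer what it sees (needs root).
        result = subprocess.run(["multipath", "-ll"], capture_output=True, text=True, check=True)
        print(result.stdout)

        # Very rough heuristic: path lines mention the underlying sdX devices.
        paths = [line for line in result.stdout.splitlines() if " sd" in line]
        print(f"paths reported by dm-multipath: {len(paths)}")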
  9.

    Zfs over iscsi plugin

    Hi Team, was wondering if there is any movement on the ZFS over iSCSI plugin for multipath? https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI It looks like this wiki page was last updated for FreeBSD v10, which is a little old now. Seeing if there are any updates to this plugin? Cheers, G
  10.

    Max number of hosts recommended for Ceph and PVE hyperconverged

    I don't have a specific number in mind; I'm looking into best fit and costing compared to other solutions. We are using vSAN on our VMware deployment and thinking about what's a good equivalent for Proxmox. Have been looking at LINBIT DRBD and other solutions like TrueNAS, which would use the...
  11.

    Max number of hosts recommended for Ceph and PVE hyperconverged

    Hi Team, I'm investigating using Ceph and PVE hyperconverged. Reading over the documentation, there are some anecdotal numbers for small deployments. What constitutes a small deployment, is this the minimum of 3 hosts? What's the max recommendation for the number of hosts that can be run in a cluster...
  12.

    Wish: Amazon S3 integration

    Hi LnxBil, thanks for the reference post, had a quick look at the script and it's quite straightforward. What I would prefer is to have these additional features: incorporate some sort of process for error checking once the backup is uploaded (verify). Another process here that just slipped my...
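
    Not the script referenced above, just a rough sketch of what a post-upload verify step could look like, assuming boto3 and hypothetical bucket/key/file names; for single-part, unencrypted uploads the S3 ETag is the object's MD5, otherwise this falls back to a size check:

        import hashlib
        import os
        import boto3

        BUCKET = "my-pve-backups"                          # assumption: your bucket name
        KEY = "dump/vzdump-qemu-100.vma.zst"               # assumption: backup object key
        LOCAL = "/var/lib/vz/dump/vzdump-qemu-100.vma.zst" # assumption: local backup path

        def local_md5(path, chunk=8 * 1024 * 1024):
            # Stream the file so large vzdump archives don't blow out memory.
            h = hashlib.md5()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        s3 = boto3.client("s3")
        s3.upload_file(LOCAL, BUCKET, KEY)   # note: boto3 may switch to multipart for big files

        head = s3.head_object(Bucket=BUCKET, Key=KEY)
        etag = head["ETag"].strip('"')

        if "-" in etag:
            # Multipart upload: ETag is not a plain MD5, so only compare sizes here.
            ok = head["ContentLength"] == os.path.getsize(LOCAL)
        else:
            ok = etag == local_md5(LOCAL)

        print("verified" if ok else "MISMATCH - keep the local copy and investigate")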
  13.

    Wish: Amazon S3 integration

    Hi Alwin, is this something we could engage the Proxmox team to develop? Cheers, G
  14.

    Hey thanks for sharing :) any issues with v6 PVE and 3par? about to update a server shortly...

    Hey, thanks for sharing :) Any issues with v6 PVE and 3PAR? About to update a server shortly, but it's standalone so may not encounter all the issues others have been reporting with corosync. Cheers, G
  15.

    Proxmox cluster + ceph recommendations

    I'm wondering if this article can explain why it's not working? https://pve.proxmox.com/wiki/Software_RAID I've opened a ticket with Proxmox about software RAID support and what options we have, as the above link shows it's apparently not supported :( Let's see what they come back with...
  16.

    Proxmox cluster + ceph recommendations

    Hi a4t3rburn3r, I've done some digging on a few different fronts and also have some questions before we can determine what's causing the LVM rebuild issue. In relation to hardware RAID for NVMe, it can be done at the cost of bus speed, so having to traverse the PCI bus will mean you won't...
  17.

    Proxmox cluster + ceph recommendations

    Hi a4t3rburn3r, thanks for pointing out what you were referring to regarding the LVM RAID, and I agree with some of your points. In relation to hardware RAID for NVMe, of every vendor we have spoken with for quotes recently (in the last 3 weeks), Dell, HPE and SuperMicro, none of them offer a...
  18.

    Proxmox cluster + ceph recommendations

    I believe this clearly explains why your LVM is failing: it's not actually RAID, it's an extension layer to increase space across physical drives.
    LVM: no redundancy, i.e. 10 + 10 = 20
    RAID 1: mirror replication, 10 + 10 = 10
    RAID 5: parity, 1 drive for fault tolerance, 10 + 10 + 10 = 20
    Etc...
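
    The same arithmetic written out as a small Python sketch (drive sizes in TB taken from the post above; "lvm-linear" here means a plain linear LVM volume, not lvmraid):

        # Usable capacity per layout, assuming all drives are the same size.
        def usable_tb(layout, drives):
            n, size = len(drives), drives[0]
            if layout == "lvm-linear":   # plain LVM just concatenates space, no redundancy
                return sum(drives)
            if layout == "raid1":        # mirror: capacity of a single drive
                return size
            if layout == "raid5":        # one drive's worth of space goes to parity
                return (n - 1) * size
            raise ValueError(layout)

        print(usable_tb("lvm-linear", [10, 10]))      # 20 TB, no fault tolerance
        print(usable_tb("raid1",      [10, 10]))      # 10 TB, survives 1 drive failure
        print(usable_tb("raid5",      [10, 10, 10]))  # 20 TB, survives 1 drive failure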
  19.

    Proxmox cluster + ceph recommendations

    Hi a4t3rburn3r, yes, software RAID in my opinion is pretty useless and needs refinement. I would stick with a hardware RAID option using SSDs. Eject faulty > insert replacement > rebuild hands-free > job done. ZFS RAID is better, but too many overheads if you're looking for performance. If...
  20.

    Proxmox cluster + ceph recommendations

    Hi a4t3rburn3r, you wouldn't use ZFS on top of DRBD; there will be a performance hit from ZFS, and yes, you are correct, there is other memory overhead to consider + more to manage. Did you use software RAID or hardware RAID? With hardware RAID, just replace the drive and it should auto-rebuild with no...