Search results

  1. Conflicting maxfiles policy and retention period

    Hi @fabian please see below output.

    ~# cat /etc/pve/storage.cfg
    dir: local
            path /var/lib/vz
            content backup,vztmpl,iso

    lvmthin: local-lvm
            thinpool data
            vgname pve
            content rootdir,images

    cifs: backup
            path /mnt/pve/backup
            server xxxxxx...
  2. Conflicting maxfiles policy and retention period

    Hi @narrateourale updates have now been installed :) Yes, all the same storage, which is why it's strange behavior. Are there any places that could have a per-VM backup config that I can look at? Very strange and I really need to get it sorted out :) ""Cheers G
  3. Conflicting maxfiles policy and retention period

    Hi Team, I have a strange issue with a backup policy. Round 1: we previously had a backup policy to take a backup once per week and keep 1 backup. As things changed, we needed to update this policy to accommodate more frequent backups. Round 2: the old policy was removed and a new policy...
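
    For context, retention can be defined in more than one place in Proxmox VE, which is a common source of exactly this kind of conflict. A minimal storage.cfg sketch, assuming PVE 6.3 or later where prune-backups supersedes the older maxfiles option (the storage name and keep counts here are illustrative, not taken from the thread):

    cifs: backup
            path /mnt/pve/backup
            content backup
            prune-backups keep-daily=7,keep-weekly=1

    The backup job itself (in /etc/pve/jobs.cfg on newer releases, or the older /etc/pve/vzdump.cron) can carry its own retention setting, which takes precedence over the storage default and is worth checking when the effective behavior does not match the storage definition.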
  4. ZFS over iSCSI Network HA

    OK, with ZFS over iSCSI Proxmox creates the LUN per VM; the correct term would be zvol. This zvol will be managed by Proxmox, snapshots managed by Proxmox, etc. Each zvol will have its own snapshots using ZFS snapshots. My understanding is when you migrate a VM between hosts the config file is...
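
    For reference, a ZFS over iSCSI definition in storage.cfg looks roughly like the sketch below, here assuming the LIO provider (comstar, iet, and istgt take slightly different options); the pool, portal, and target values are placeholders. Proxmox then creates one zvol per VM disk under that pool and drives snapshots through ZFS on the target:

    zfs: truenas-zfs
            pool tank/proxmox
            portal 192.0.2.10
            target iqn.2005-10.org.example:target0
            iscsiprovider LIO
            lio_tpg tpg1
            content images
            sparse 1
            blocksize 8k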
  5. ZFS over iSCSI Network HA

    Yes, thank you, that was the intention for the stack: dual-controller NAS, dual switches with an LACP bond, 10 or 25 Gb switches and NICs using LACP. Will keep you posted :) Appreciate your assistance. ""Cheers G
  6. ZFS over iSCSI Network HA

    Hey, LACP is for the network redundancy and, as you have advised, will work :) This is more to do with the controller; in theory I don't see why it wouldn't work. It was more a question of whether you had tested something like the above and what the real-world results are. All good, thanks for taking the time...
  7. ZFS over iSCSI Network HA

    For example you can get a TrueNas shared storage box with dual controllers running in active/passive mode. If 1 controller fails the second takes over to minimise any potential outage; the second controller takes over in about 20 sec, so iSCSI, if set correctly, will keep working and VMs should...
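
    Whether running VMs ride out a roughly 20-second controller takeover largely comes down to the initiator's I/O timeout exceeding the failover window. The relevant open-iscsi setting is sketched below; 120 s is the shipped default, so the point is mainly not to lower it below the expected takeover time:

    # /etc/iscsi/iscsid.conf
    node.session.timeo.replacement_timeout = 120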
  8. ZFS over iSCSI Network HA

    Mir, I'm trying to get my head around the correct use case for ZFS over iSCSI. Apart from: - running on central storage - native ZFS snaps - data scrubbing etc. - reduced ZFS memory overhead per host to be able to run more VMs (I think this is a big one) do live migrations work the same as normal...
  9. ZFS over iSCSI Network HA

    Is this you? https://github.com/TheGrandWazoo/freenas-proxmox
  10. ZFS over iSCSI Network HA

    Mir, have you played with ZFS over iSCSI? I had noticed some caveats while reading forum posts, but the information is scattered everywhere; it would be great if all this info was collated into 1 place so users could make an informed decision if this is the right protocol for them. Would love to...
  11. ZFS over iSCSI Network HA

    Hi All, have been looking around and not found an answer yet. I've noticed the Proxmox wiki, which is quite out of date, mentions that MPIO is not available for ZFS over iSCSI. Is LACP or NIC active/passive available for ZFS over iSCSI? Considering using a Free/TrueNAS system with dual...
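
    Because the ZFS over iSCSI plugin uses a single iSCSI session rather than MPIO, redundancy has to come from the network layer. A minimal /etc/network/interfaces sketch for an LACP (802.3ad) bond on the storage network, with interface names and the address as placeholders:

    auto bond0
    iface bond0 inet static
            address 10.10.10.11/24
            bond-slaves enp1s0f0 enp1s0f1
            bond-mode 802.3ad
            bond-miimon 100
            bond-xmit-hash-policy layer3+4

    For active/passive instead, bond-mode active-backup works without any switch-side LACP support.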
  12. ZFS, iSCSI, and multipathd

    danielb, what do you see as issues with host-based MPIO that we should look out for? ""Cheers Gerardo
  13. Zfs over iscsi plugin

    OK, I see how this is working now. I'm looking at it from a different perspective: the hypervisor managing the multipathing and only having a few LUNs, i.e. max 9 TB in size with 30-50 VMs per LUN. This would make the VM disk just a sparse file sitting on block storage formatted with something...
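
    The layout described here (a few large LUNs multipathed on the host and carved up for many VMs) is essentially the classic iSCSI plus multipath plus LVM pattern rather than ZFS over iSCSI. A minimal /etc/multipath.conf sketch, where the WWID is a placeholder:

    defaults {
            user_friendly_names yes
            find_multipaths yes
    }
    multipaths {
            multipath {
                    wwid  3600144f028f88a0000005037a95d0001
                    alias vmstore0
            }
    }

    The resulting /dev/mapper/vmstore0 device can then back a shared LVM volume group in Proxmox.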
  14. ZFS, iSCSI, and multipathd

    Sure, I understand your comment, but it's relevant how they dealt with the issue, as there is obviously a solution out there that works for VMware and Hyper-V (Windows). I am aware these both have a different stack, but understanding how something works, even if it's different overall, may...
  15. ZFS, iSCSI, and multipathd

    Wondering how VMware does it? How have they dealt with the problem?
  16. Max number of hosts recommended for Ceph and PVE hyperconverged

    Yes, thank you, agreed, a min of 4 nodes :) Unfortunately, from my current research the new hyperconverged chassis are pretty much limited to 1 type of storage. I can see where a separate Ceph cluster would benefit from multiple storage tiers, or even older servers that can accommodate...
  17. Max number of hosts recommended for Ceph and PVE hyperconverged

    Only saying 10 Gb (SFP+ fibre) as we already have an existing investment in switches, but we could easily purchase some larger-capacity switches. The current 10 Gb SFP+ bidirectional is 20 Gb, and if using LACP we can bond and get 40 Gb from a dual 10 Gb card. Can Ceph work in this way? Just...
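
    One caveat worth noting: LACP balances per flow, so any single TCP connection is still capped at the speed of one 10 Gb link. Ceph tends to aggregate well over a bond only because it opens many OSD and client connections in parallel. The usual ceph.conf network split is sketched below, with placeholder subnets:

    [global]
            public_network = 10.10.10.0/24
            cluster_network = 10.10.20.0/24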
  18. Max number of hosts recommended for Ceph and PVE hyperconverged

    SG90, thanks for your input. I guess I'm coming from poor experiences with distributed storage in the past, and it's made me hesitant. Normally we run hosts with 256 GB RAM and dual 10 Gb Ethernet on a dedicated storage network. vSAN only uses 1 x 10 Gb port but can LACP multiple ports for...
  19. ZFS, iSCSI, and multipathd

    Just wondering why QEMU is the show-stopper here? Maybe it's my lack of understanding, but shouldn't multipathing be determined by the OS, in this case Debian? Would be interested in understanding this issue in more detail. And since it's not currently supported, what's the best way around...
