Search results

  1.

    PMG Suitability and recommendations for customer / prospect

    Some good points there @mmidgett. Is PMG able to be configured to send out via multiple IPs? I haven't explored this yet; it would be great to be able to make this change manually if there is a block issue due to spamming. Cheers, G
  2.

    PMG Suitability and recommendations for customer / prospect

    May I ask why? What's the difference if you also send mail via a subdomain? More info please, to better understand the problem. Ta
  3.

    ZFS and Ceph on same cluster

    Thanks for confirming @ph0x. I'm hoping someone with this same setup can comment and confirm as well. Cheers, G
  4.

    ZFS and Ceph on same cluster

    Hi @ph0x, thanks for the reply. That's the reason for my question, specifically with regard to mixing and matching with Ceph. I'm looking for confirmation that it's usable in this way with a mix of ZFS and Ceph, using Proxmox's own Ceph specifically as opposed to an external third-party Ceph cluster. Ta
  5.

    Strange slowness and micro interruptions (solved but want to share)

    Hi @mgiammarco, may I please ask: with your DL360 G9s running Ceph, what storage controller are you using? We are looking at repurposing similar hardware into a Ceph cluster and are still reviewing which HP controller is best. We have both H240 and 440 controllers available. Would love...
  6.

    ZFS and Ceph on same cluster

    Hi all, just wondering: is it possible to have a Proxmox cluster with both a local ZFS data store per host and Ceph on the same cluster? Example: 5 or 6 hosts, 10 bays per host, 4 bays for a ZFS mirror, 6 bays for Ceph. Is it possible to have this mixed storage design in a single cluster? From a...
  7.

    you can't move a disk with snapshots and delete the source (500)

    @tom are we able to get this onto the roadmap please? Is there a workaround? Are we able to move the disk including snapshots and then delete the original disk from the shell/command line? Cheers, G
  8.

    Full clone feature is not supported for drive 'efidisk0' (500)?

    Awesome, my info on this topic is now complete :) It also gives more food for thought on building a Ceph cluster.
  9.

    Full clone feature is not supported for drive 'efidisk0' (500)?

    Thanks @fabian, that makes a lot of sense :) When using Ceph, is this using qcow2 disks or something different? Does running a VM on Ceph storage allow for cloning from a snapshot? Cheers, G
  10.

    Full clone feature is not supported for drive 'efidisk0' (500)?

    Hey @fabian, that's a much better explanation, thanks for taking the time to explain. May I ask why this isn't an issue when using a standard LVM volume and the file-based qcow2 image format? I'm able to clone a snapshot there. Thanks in advance :) Cheers, G
  11.

    Full clone feature is not supported for drive 'efidisk0' (500)?

    Hey @fabian, thanks for the comments and direction. Maybe I'm missing something in my understanding of how this all layers together. I'm aware that qcow2 images use copy-on-write in a similar way to how ZFS uses it; please correct me if I'm off track. With normal ZFS snapshots we have the ability to be...
  12.

    Full clone feature is not supported for drive 'efidisk0' (500)?

    Hi @tom, just to chime in, I'm also seeing this same error in the most recent version of Proxmox. Do we have a status update on this issue? Our output below: Cheers, G # pveversion -v proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve) pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)...
  13.

    DL380 gen 8 ZFS striped mirror of 4x SSD poor performance

    Hey @nlubello, it's most likely the controller card that's the issue. Stick with LSI or Broadcom 9300 SAS 8-port dedicated HBA controllers. A common issue with HP and Dell controllers that switch between RAID and HBA mode is that they have their own caches and other features that interfere with ZFS...
  14.

    KVM guests freeze (hung tasks) during backup/restore/migrate

    Hi @rakurtz, I would say that the controller cache is another level of cache that can't be controlled by ZFS, while drive cache may act in a different way. RAID controller cache is specifically designed to sit in the middle, while drive cache is directly on the drive. ZFS uses RAM for cache and...
  15.

    KVM guests freeze (hung tasks) during backup/restore/migrate

    Thanks for confirming. I did a check with our new cluster and all the drive caches are on by default. These proprietary cards from Dell and HP seem to be a common thread in these types of issues, from what I've discovered so far. Cheers, G
  16.

    KVM guests freeze (hung tasks) during backup/restore/migrate

    Hi @shaneshort, just curious to get a little more info on your server, as I have a theory and wanted to cross-check a few configs. Are you still experiencing this issue? Are you OK to share the following info: brand of server, RAID/HBA card, model of server, RAID configuration, ZFS mirror...
  17.

    PMG Suitability and recommendations for customer / prospect

    Hey @stefanzman, did you ever get an answer to your question? I can only vouch for PMG being able to handle a lot of email at a time; just last week we saw over 6k emails hit our client's PMG, with 5k of those being spam and viruses, and all remaining emails were quickly and cleanly filtered. The...
  18.

    Storage replication - Avoid data loss when migrating between nodes

    Hey @rafafell, from my understanding Proxmox replication is based on snapshots, which are taken incrementally at given time intervals; it's not streaming replication in real time. If you are looking for failover in close to real time with close to zero data loss, then some form of shared storage...
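
As background for the mixed-storage question in result 6: in Proxmox VE, per-node ZFS pools and a cluster-wide Ceph RBD pool are simply separate entries in /etc/pve/storage.cfg, so they can coexist in one cluster. A minimal sketch, assuming hypothetical storage IDs (`local-zfs`, `ceph-vm`), pool names, and node list:

```
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes pve1,pve2,pve3

rbd: ceph-vm
        pool vm-pool
        content images
        krbd 0
```

The `nodes` line restricts the ZFS storage to the hosts that actually have that local pool, while the RBD entry is visible cluster-wide.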
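
Result 18 describes storage replication as incremental snapshots on a schedule rather than real-time streaming; that schedule is managed per guest with the `pvesr` tool. A hedged CLI sketch on a hypothetical cluster (VM 100 replicating to node `pve2`; the job ID and schedule are placeholders):

```shell
# Create replication job 100-0: sync VM 100's disks to node pve2 every 15 minutes
# (schedules use systemd-calendar-style syntax).
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Show replication state: last sync time, duration, and any failures.
pvesr status
```

Between sync points, writes made since the last snapshot are lost on failover, which is why the post points at shared storage for near-zero data loss.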
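
On the cache layering discussed in results 14 and 15 (RAID controller cache vs. on-drive cache): behind a plain HBA, the on-drive volatile write cache can usually be checked and toggled from Linux with `hdparm` (a sketch; `/dev/sda` is a placeholder device, and SAS drives may need `sdparm` instead):

```shell
# Query whether the drive's volatile write cache is enabled.
hdparm -W /dev/sda

# Turn the on-drive write cache off if the setup calls for it.
hdparm -W0 /dev/sda
```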