Search results

  1. you can't move a disk with snapshots and delete the source (500)

    @tom are we able to get this onto the roadmap please? Is there a workaround? Are we able to move the disk, including its snapshots, and then delete the original disk from the shell/command line? Cheers G
  2. Full clone feature is not supported for drive 'efidisk0' (500)?

    Awesome, my info on this topic is now complete :) It also gives more food for thought on building a Ceph cluster.
  3. Full clone feature is not supported for drive 'efidisk0' (500)?

    Thanks @fabian, that makes a lot of sense :) When using Ceph, does this use qcow2 disks or something different? Does running a VM on Ceph storage allow cloning from a snapshot? Cheers G
  4. Full clone feature is not supported for drive 'efidisk0' (500)?

    Hey @fabian, that's a much better explanation, thanks for taking the time to explain. May I ask why this isn't an issue when using a standard LVM volume or the file-based qcow2 image format? I'm able to clone a snapshot there. Thanks in advance :) Cheers G
  5. Full clone feature is not supported for drive 'efidisk0' (500)?

    Hey @fabian, thanks for the comments and direction. Maybe I'm missing something in my understanding of how this all layers together. I'm aware that qcow2 images use COW in a similar way to how ZFS uses COW, please correct me if I'm off track. With normal ZFS snapshots we have the ability to be...
  6. Full clone feature is not supported for drive 'efidisk0' (500)?

    Hi @tom, just to chime in, I'm also seeing this same error on the most recent version of Proxmox. Do we have a status update on this issue? Our output below: Cheers G # pveversion -v proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve) pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)...
  7. DL380 gen 8 ZFS striped mirror of 4x SSD poor performance

    Hey @nlubello, it's most likely the controller card being the issue. Stick with LSI or Broadcom 9300 8-port dedicated SAS HBA controllers. It's a common issue with HP and Dell controllers that switch between RAID and HBA mode, since they have their own caches and other features that interfere with ZFS...
  8. KVM guests freeze (hung tasks) during backup/restore/migrate

    Hi @rakurtz, I would say that the controller cache is another level of cache that can't be controlled by ZFS, while drive cache may act in a different way. RAID controller cache is specifically designed to be a middle-man cache, while drive cache sits directly on the drive. ZFS uses RAM for cache and...
  9. KVM guests freeze (hung tasks) during backup/restore/migrate

    Thanks for confirming. I did a check with our new cluster and all the drive caches are on by default. These proprietary cards from Dell and HP seem to be a common thread for these types of issues, from what I've discovered so far. Cheers G
  10. KVM guests freeze (hung tasks) during backup/restore/migrate

    Hi @shaneshort, just curious to get a little more info on your server, as I have a theory and wanted to cross-check a few configs. Are you still experiencing this issue? Are you OK to share the following info: brand of server, RAID/HBA card, model of server, which RAID configuration, ZFS mirror...
  11. PMG Suitability and recommendations for customer / prospect

    Hey @stefanzman, did you ever get an answer to your question? I can only vouch for PMG being able to handle a lot of email at a time; just last week we saw over 6k emails hit our client's PMG, with 5k of those being spam and viruses, and all remaining emails were quickly and cleanly filtered. The...
  12. Storage replication - Avoid data loss when migrating between nodes

    Hey @rafafell, from my understanding, Proxmox replication is based on snapshots which are taken incrementally at given time intervals. It's not streaming replication in real time. If you are looking for failover in close to real time with close to zero data loss, then some form of shared storage...
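The point made in that reply can be sketched in a few lines. This is an illustrative model only, not Proxmox code; the function name and the 15-minute schedule are hypothetical, chosen to mirror a typical replication interval:

```python
# Illustrative sketch, not Proxmox internals: with snapshot-based
# replication every `interval` minutes, any writes made after the last
# completed sync are lost if the source node fails before the next one.

def lost_writes(write_times, interval, failure_time):
    """Return write timestamps (in minutes) that never reached the replica."""
    last_sync = (failure_time // interval) * interval  # last completed sync
    return [t for t in write_times if last_sync < t <= failure_time]

# Sync every 15 minutes; source node fails at minute 29.
print(lost_writes([3, 10, 16, 22, 28], 15, 29))  # → [16, 22, 28]
```

In other words, the worst-case data loss is bounded by the schedule interval, which is why the reply points to shared storage when close-to-zero loss is required.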
  13. rbd-mirror support

    Thanks for this reference :) So the point of difference, if selected, is journal-based replication as an option, compared to snapshot replication, which is what ZFS uses. Both are crash-consistent, except that journal-based replication can be accurate up to the second, compared to snapshot...
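The journal-vs-snapshot distinction in that reply boils down to a difference in worst-case data loss. A minimal sketch, assuming a hypothetical 5-minute snapshot interval and a ~1-second journal replay lag (illustrative figures, not rbd-mirror measurements):

```python
# Illustrative comparison, not Ceph code: worst-case data loss (RPO)
# under the two mirroring modes discussed above.

def snapshot_rpo(interval_s: float) -> float:
    """Snapshot mode: everything since the last synced snapshot can be lost."""
    return interval_s

def journal_rpo(replay_lag_s: float) -> float:
    """Journal mode: only writes not yet replayed on the peer can be lost."""
    return replay_lag_s

# e.g. 5-minute snapshot schedule vs. ~1 s of journal replay lag
print(snapshot_rpo(300.0), journal_rpo(1.0))  # → 300.0 1.0
```

Both modes yield a crash-consistent replica; the journal simply shrinks the window of unreplicated writes from the schedule interval down to the replay lag.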
  14. rbd-mirror support

    OK, so it's DR. Is there an automatic way to trigger a failover, or is this all done manually at this stage? So would you say it's equivalent to ZFS replication between sites? Any additional positives/negatives of using RBD mirroring? Thanks, I always appreciate your input. Speak soon. Cheers G
  15. rbd-mirror support

    Hey @Alwin, happy NY. I'm just checking in on whether there has been any more detailed testing of the rbd-mirror feature in Proxmox? It's something that has caught my eye vs ZFS replication. Wondering if the replication is any better or worse for DR. Would it be considered HA or DR? Cheers G
  16. rbd-mirror support

    Hey @hacman, happy NY. Just curious, you mentioned DRBD; are you using this per VM or per host to replicate all VMs? Are you still using Xen or have you made the switch to Proxmox? Cheers G
  17. [SOLVED] SATA devices missing after update

    Thank you. Are you using ZFS with the ASMedia controller? Cheers G
  18. SATA kernel panics after some time on Supermicro AMD Epyc boards

    Hi @bytemine, just wondering if this issue was resolved? Were you running ZFS on the onboard ASMedia SATA? Cheers G