Search results

  1. Is PBS using single thread only?

    The issue he is facing with the VM is that the vCPU type is probably set to the KVM default, and the AES flag needs to be on for TLS but is not supported on that CPU type, so change your vCPU type to host and enable the AES flag.
  2. Slow ZRAID2 ? - Your Advice on Tweaks i might have missed

    It's a physical server with 2x E5-2680 v2. While running the benchmark I see only one busy thread on each CPU, so it uses only one core per socket. That is what I see in htop during the test.
  3. Slow ZRAID2 ? - Your Advice on Tweaks i might have missed

    We have 4x Samsung PM863a 3.84 TB 3D-NAND TLC drives in RAIDZ2 behind an HPE HBA, and we believe the performance is really slow. Is there anything we can do to improve overall performance? We enabled TRIM with no change. The PBS host has 128 GB of RAM and uses around 20-30%.
  4. PBS slow backup

    Is there any benefit to adding a special device when running 100% SSD? I saw somewhere that we can issue a command for caching on the ZFS pool for enterprise SSDs that handle write acknowledgement automatically, but I can't find it.
  5. Slow Snapshots?

    I just upgraded and can confirm the issue is now gone. Thanks.
  6. Slow Snapshots?

    Hi fabian, we are having an issue where a snapshot of a two-disk qcow2 VM (1 TB + 2 TB, 3 TB total) with 32 GB of RAM takes forever; the VM has 10 GB of RAM in use, but the snapshot progress crosses the 10 GB mark. The host uses a shared GFS2 file system and has 128 GB of RAM (75% free). So far, while writing this message, I have always killed...
  7. ZFS over ISCSI using THEGRANDWAZOO script with TRUENAS SCALE

    Can you share the script and procedure with the team? :)
  8. ZFS over ISCSI using THEGRANDWAZOO script with TRUENAS SCALE

    We have not been able to use it with TrueNAS SCALE correctly yet. We are using Ceph for now.
  9. move osd to another node

    Can we import it into a node of a completely different cluster, or does it have to be a node that was previously a member of the same cluster?
  10. move osd to another node

    Hi @fabian, by journaling do you mean the metadata server when using CephFS? So I assume it does not apply to RBD?
  11. ZFS over ISCSI using THEGRANDWAZOO script with TRUENAS SCALE

    I'm surprised there is not more interest in getting a plugin working with Proxmox and TrueNAS.
  12. Replication on CEPH RBD - are we going to see this soon?

    Hi, Proxmox is a killer hypervisor against the competition, everybody knows that, but one of the most important features we lack is REPLICATION of virtual machines on file systems other than ZFS. Why is this still not available? We have a lot of VMs we can't move from Hyper-V and VMware because of...
  13. [SOLVED] cephFS , can we some how force allow Vm image for QM-REMOTE-MIGRATE purpose?

    Worked like a charm, thanks. Question: is it normal that each successfully live-migrated VM leaves the old one in a locked (migrate) state? This happened on each of my tests, and the migration task job reports OK (no error).
  14. [SOLVED] cephFS , can we some how force allow Vm image for QM-REMOTE-MIGRATE purpose?

    Is there a way to enable VM images on a CephFS pool? @fabian
  15. [SOLVED] cephFS , can we some how force allow Vm image for QM-REMOTE-MIGRATE purpose?

    Hi, sorry, I was referring to qm remote-migrate. We can't go from a qcow2 file to an RBD block device with qm remote-migrate; it always fails as unsupported, even though it works when moving a disk within the same cluster. So I need to send the qcow2 to another file system, and VM disks seem disabled on CephFS, and it's the...
  16. [SOLVED] Shared SAS storage for Proxmox cluster 2023

    Correct, we have been using Proxmox with HPE shared SAS storage for years, with a GFS2 file system and DLM. But it's tricky, with a lot of learning curve. That is why we moved all new deployments to Ceph to ease maintenance.
  17. [SOLVED] cephFS , can we some how force allow Vm image for QM-REMOTE-MIGRATE purpose?

    Scenario: we want to move VMs from shared SAS storage on CLUSTER1 to RBD on CLUSTER2. The VM images on CLUSTER1 are qcow2, and I get an error saying RBD is not supported. My only solution is to use a temporary Ceph file system, move the VMs there in step 1, and switch them to RBD in step 2, but...
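The fix suggested in result 1 (switching the vCPU type so AES-NI is exposed to the guest) can be applied from the Proxmox VE host CLI; a minimal sketch, where the VM ID 100 is a hypothetical example:

```shell
# Expose the host CPU's feature flags (including AES-NI) to the guest
# instead of the default emulated CPU type, which lacks the AES flag.
qm set 100 --cpu host

# Inside the guest, the flag should then be visible:
# grep -m1 -o aes /proc/cpuinfo
```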
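For the TRIM question in result 3, ZFS supports both one-off and continuous TRIM; a sketch, where the pool name `backup` is a hypothetical example:

```shell
# One-off TRIM pass over all vdevs in the pool
zpool trim backup

# Automatically TRIM freed blocks as they are released
zpool set autotrim=on backup

# Show per-device TRIM progress and status
zpool status -t backup
```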
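On the special-device question in result 4: a special vdev mainly accelerates metadata and small blocks, so on an all-flash pool the gain is usually modest. The "write acknowledge" setting the post seems to be recalling is most likely the dataset `sync` property; a hedged sketch, with `backup` and `backup/pbs` as hypothetical pool and dataset names:

```shell
# A mirrored special vdev holds metadata/small blocks; of limited
# benefit when the data vdevs are already SSDs:
# zpool add backup special mirror /dev/sdx /dev/sdy

# Inspect the dataset's current sync behaviour
zfs get sync backup/pbs

# Relaxing sync trades crash consistency for latency; only consider it
# on drives with power-loss protection, and with care:
# zfs set sync=disabled backup/pbs
```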
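On moving an OSD (results 9 and 10): an OSD records the fsid of the cluster that created it, so its disk can normally only be re-activated on a node of the same cluster. A sketch of re-activating a moved OSD disk on the destination node, where the OSD ID is a hypothetical example:

```shell
# Detect and activate all Ceph OSD volumes present on this node
ceph-volume lvm activate --all

# Or activate a single OSD by id and fsid:
# ceph-volume lvm activate 3 <osd-fsid>

# Verify the OSD is back up and in the CRUSH tree
ceph osd tree
```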
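For result 14, the Proxmox VE CephFS storage type does not offer the `images` content type; a common workaround is to register the mounted CephFS path as a directory storage that does allow VM images. A sketch, where the storage ID `cephfs-images` and the mountpoint path are assumptions:

```shell
# Register the CephFS mountpoint as a shared directory storage that
# accepts VM disk images (stored as qcow2/raw files):
pvesm add dir cephfs-images --path /mnt/pve/cephfs --content images --shared 1
```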
