Recent content by tawh

  1.

    duplicate image name under ceph rbd

    My Ceph cluster suddenly warned about a full disk. I investigated the usage of the RBD volumes and found that there are two images with the same name:
        NAME            PROVISIONED  USED
        vm-110-disk-0   4 GiB        1.5 GiB
        vm-110-disk-1   6 TiB        5.5 TiB  <----
        vm-2000-disk-0  276 MiB      84...
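
    For reference, a hedged sketch of how such per-image usage is typically listed; the pool and image names below are placeholders, not taken from the thread:

        # List provisioned vs. actual usage for every image in a pool
        # ("mypool" is a placeholder pool name).
        rbd du --pool mypool

        # Inspect the unexpectedly large image more closely
        # ("myimage" is a placeholder image name).
        rbd du mypool/myimage
        rbd info mypool/myimage
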
  2.

    hookscript execution order during migration

    Hello, is there anyone who has experience using a hookscript during migration?
  3.

    hookscript execution order during migration

    I have written a hookscript for monitoring network changes after certain VMs start or stop. It works for the actual startup and shutdown of a VM on a PVE node. However, when I do a migration between two PVE nodes, only the target PVE calls the hookscript, with the pre-start and post-start parameters...
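
    For context, a minimal, hedged sketch of such a hookscript, assuming only the two arguments (VMID and phase) that PVE passes to it; the log messages are illustrative:

        #!/bin/bash
        # Illustrative hookscript: PVE invokes it with two arguments,
        # the VMID and the phase name.
        vmid="$1"
        phase="$2"

        case "$phase" in
            pre-start|post-start|pre-stop|post-stop)
                logger "hookscript: vm $vmid phase $phase on $(hostname)"
                ;;
        esac
        exit 0

    It would typically be attached with "qm set <vmid> --hookscript local:snippets/hook.sh", where the snippets storage and file name are placeholders.
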
  4.

    CEPH equivalent configuration for multiple hosts with local RAID

    Thanks for your reply. To verify that my understanding is correct: if I deploy this policy to 3 hosts and form a pool with 9 OSDs (as mentioned in Post #1), will CEPH replicate the data to the other 2 hosts, or will it just pick an arbitrary host to store the data?
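
    For reference, a hedged sketch of how host-level placement is usually enforced in this situation; the rule and pool names are placeholders, not taken from the thread:

        # Create a replicated CRUSH rule whose failure domain is the host,
        # so each replica lands on a different host even with 3 OSDs per host.
        ceph osd crush rule create-replicated replicated_host default host

        # Point an existing pool at that rule ("mypool" is a placeholder).
        ceph osd pool set mypool crush_rule replicated_host

        # Inspect the rule to confirm the chooseleaf step uses type "host".
        ceph osd crush rule dump replicated_host
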
  5.

    CEPH equivalent configuration for multiple hosts with local RAID

    I set up a 3-host Proxmox cluster where each host has 1x 10 TB HDD, with CEPH configured in replicated mode, so the effective storage in the cluster is 10 TB. As the IO performance is poor, I am thinking of replacing each 10 TB disk with 3x 6 TB disks. I want to maintain the same host redundancy level as before but enable...
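
    As a hedged illustration of the disk-swap step, each new disk would become its own OSD; the device names are placeholders, not taken from the thread:

        # On each host, create one OSD per new 6 TB disk
        # (/dev/sdb, /dev/sdc and /dev/sdd are placeholder device names).
        pveceph osd create /dev/sdb
        pveceph osd create /dev/sdc
        pveceph osd create /dev/sdd
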
  6.

    RBD mirroring slow in Proxmox

    Wow, I had forgotten this thread, as no one replied for several weeks. Thanks a lot for bringing it up again. I also tried playing with those parameters, but it did not help. As a result, I gave up on rbd_mirror and used LINSTOR.
  7.

    Proxmox VE 6.2 prevents last node from rebooting with HA enabled

    Today I tried this on the third node (pve3): I installed a VM on local-lvm, put it under the HA resource manager, and created an HA group which contains only pve3. When I click "shutdown" on pve3, the log in the GUI shows "Stop all VMs and Containers" and then waits forever again. On the...
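
    For reference, a hedged sketch of the equivalent CLI steps; the group name and VMID are placeholders, not taken from the thread:

        # Create an HA group restricted to pve3 ("pve3only" is a placeholder name).
        ha-manager groupadd pve3only --nodes pve3

        # Put the VM under HA and pin it to that group (120 is a placeholder VMID).
        ha-manager add vm:120 --group pve3only --state started

        # Check what the HA stack currently reports.
        ha-manager status
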
  8.

    Proxmox VE 6.2 prevents last node from rebooting with HA enabled

    I have a three-node cluster with CEPH and DRBD installed as shared storage (CEPH is used entirely for the LINSTOR controller; a long story that is not discussed in this thread). I created two VMs, one "vm:100" on CEPH (as said, hosting the LINSTOR controller) and one "vm:110" on DRBD (appliance by...
  9.

    RBD mirroring slow in Proxmox

    Can any member share any experience with rbd mirror? If rbd mirror is not practical in terms of performance, are there any block-level real-time replication tools that can be used in Proxmox? Thanks.
  10.

    RBD mirroring slow in Proxmox

    So such a deviation causes the replay speed of the mirror to slump to about 1/16 of the bootstrapping speed? From the network utilization graph, the bandwidth used for replaying was very stable at around ~30 Mbps. By the way, with the best or optimal configuration, what is the speed of both...
  11.

    RBD mirroring slow in Proxmox

    rados bench 60 write -b 4M -t 16 --no-cleanup
    Cluster A:
        Total time run:         60.3526
        Total writes made:      2798
        Write size:             4194304
        Object size:            4194304
        Bandwidth (MB/sec):     185.443
        Stddev Bandwidth:       66.8916
        Max bandwidth (MB/sec): 352
        Min bandwidth...
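
    Since --no-cleanup leaves the benchmark objects in the pool, a hedged sketch of the usual follow-up; the pool name is a placeholder, not taken from the thread:

        # Read the objects written above back to benchmark sequential reads
        # ("mypool" is a placeholder pool name).
        rados bench -p mypool 60 seq -t 16

        # Remove the benchmark objects afterwards.
        rados -p mypool cleanup
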
  12.

    RBD mirroring slow in Proxmox

    Thanks for your reply. I understand the "replay" phenomenon. But the fact of the matter is that I can write to the primary CEPH storage at a speed of 480 Mbps, while the secondary CEPH can only replay at ~30 Mbps. I also configured a dual mirror so that I can do the reverse, but the results are the same. I...
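
    For reference, a hedged sketch of how the replay state is usually inspected on the secondary side; the pool and image names are placeholders, not taken from the thread:

        # Overall mirroring health and per-image replay state for a pool
        # ("mypool" is a placeholder pool name).
        rbd mirror pool status mypool --verbose

        # Detailed status of a single mirrored image ("myimage" is a placeholder).
        rbd mirror image status mypool/myimage
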
  13.

    RBD mirroring slow in Proxmox

    Hello all, I have two Proxmox clusters, namely A and B (both updated to the latest release). Cluster A: 3 hosts, 2 of which each have a single 10 TB disk and a 256 GB SSD for the OS, with BlueStore and bcache. The other host has only a minimal hardware configuration, hosting Proxmox for cluster quorum only and not intended to host...