Recent content by stats

  1. New node join failed and nothing can be operated after logging into the GUI

    I tried to add the 4th node in the cluster (vgpm04/172.19.0.14) from the GUI, but it failed in the process. Currently, nothing can be operated after logging into the GUI. We stopped the pve-cluster service and the corosync service on vgpm04, but this did not improve the situation. I sent you...
  2. I failed to add a new node to the cluster and the GUI became slow. How can I recover it?

    Hello, I tried to add the 4th node in the cluster (vgpm04/172.19.0.14) from the GUI, but it failed in the process. The GUI is now very slow and unusable. How can I get back to normal? I stopped the pve-cluster service and the corosync service on vgpm04, but this did not affect...
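A common first step when the GUI hangs after a failed join is to check whether the original nodes still have quorum, since pmxcfs blocks writes to /etc/pve without it. A sketch of the usual diagnostics on one of the three original nodes (the quorum-loss cause is an assumption here, not a confirmed diagnosis for this cluster):

```shell
# On an original cluster node (not vgpm04), inspect the cluster state first.
pvecm status                                    # is the cluster quorate? is vgpm04 listed?
systemctl status pve-cluster corosync pveproxy  # are the services running cleanly?

# If the half-finished join left the stack wedged, restarting it on the
# original nodes is a commonly suggested next step:
systemctl restart corosync
systemctl restart pve-cluster
systemctl restart pveproxy
```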
  3. Errors when creating and deleting snapshots of CT

    I have some additional questions regarding this. Question about "failed to open <dir>: Permission denied" when creating a snapshot: 1) Will this be improved by upgrading Proxmox to 7.4 or 8.0? 2) Is there another way to avoid the error while we use Proxmox 6.4? 3) What are the consequences of...
  4. Errors when creating and deleting snapshots of CT

    When I use encfs, what are the problems with creating a snapshot? Could you please tell me the solution? Also, librbd errors occur when deleting a snapshot, is this also related to encfs? Is there a solution for this too?
  5. Errors when creating and deleting snapshots of CT

    Dear PROXMOX support, An error occurs when creating and deleting snapshots while running a container created on PROXMOX (6.4-13). Could you tell me what the problem is? PROXMOX is a 3-node cluster and uses ceph for storage. <Error when creating snapshot> failed to open /home/.DECRYPT...
  6. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    Ceph health became HEALTH_OK after destroying osd.3. Thank you very much.
  7. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    So, will the command recover the lost PG from the other OSDs to keep 3 PG replicas?
  8. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    Thank you very much. I will try it. I have one more question: if I mark the PG as lost with the 'mark_unfound_lost delete' command that I mentioned, is it meaningless?
  9. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    Do you mean just ignore the warning and continue with the replacement process? Will the recovery process start when osd.3 is destroyed? But it is very strange that only one PG is degraded.
  10. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    Should I mark the PG as lost with the following command? I don't know how it works.

    ceph pg 11.45 mark_unfound_lost delete

    https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-pg/
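Before marking anything lost, it is worth confirming whether pg 11.45 actually has unfound objects; a PG that is merely degraded or undersized (but whose objects are all found) does not need mark_unfound_lost at all. A sketch of the usual read-only checks, run on any monitor node (the PG id is taken from the post):

```shell
# Inspect pg 11.45 before considering mark_unfound_lost.
# Every command below is read-only; nothing here deletes data.

ceph health detail            # does it say "unfound", or only "degraded/undersized"?
ceph pg 11.45 query           # check "recovery_state" and which OSDs were probed
ceph pg 11.45 list_unfound    # lists unfound objects; "num_unfound": 0 means none

# Only if objects are truly unfound, and all candidate OSDs have been
# probed, would the destructive step from the docs apply:
#   ceph pg 11.45 mark_unfound_lost delete
```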
  11. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    # ceph osd dump
    epoch 676
    fsid 0caf72c1-b05d-4f73-88da-ca4a2b89225f
    created 2017-11-29 08:33:35.211810
    modified 2021-03-01 18:29:29.970358
    flags sortbitwise,recovery_deletes,purged_snapdirs
    crush_version 19
    full_ratio 0.95
    backfillfull_ratio 0.9
    nearfull_ratio 0.85
    require_min_compat_client...
  12. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    # ceph osd tree
    ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
    -1       39.87958 root default
    -3       10.77039     host vgpm01
     3   hdd  7.27730         osd.3   up          0 1.00000
     0   ssd  3.49309         osd.0   up...
  13. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    osd pool default min size = 2
    osd pool default size = 3

    Yes, the OSDs are online. I know; I will upgrade it after the replacement from HDD to SSD.

    pg 11.45 is stuck undersized for XXXXX.XXXXX, current state active+undersized+degraded, last acting [5,4]

    The number XXXXX.XXXXX is always...
  14. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    I found the following messages. Is it stuck?

    Degraded data redundancy: 46/1454715 objects degraded (0.003%), 1 pg degraded, 1 pg undersized
    pg 11.45 is stuck undersized for 220401.107415, current state active+undersized+degraded, last acting [5,4]
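The numbers in this warning can be sanity-checked directly: with pool size 3 and an acting set of [5,4], pg 11.45 holds only two of its three intended replicas, so each of its objects is short one copy. A small sketch of the accounting (the per-PG object count and cluster-wide total are taken from the warning; the rest is illustrative):

```python
# Sanity-check Ceph's "degraded" accounting for the health warning above.
# Assumptions: pool replica size is 3 (osd pool default size = 3) and
# pg 11.45's acting set is [5, 4], as reported in the message.

pool_size = 3            # desired replicas per object
acting = [5, 4]          # OSDs currently serving pg 11.45
missing_replicas = pool_size - len(acting)   # one copy short -> "undersized"

objects_in_pg = 46       # degraded object copies attributed to this one PG
total_copies = 1454715   # total object copies cluster-wide (from the warning)

degraded = objects_in_pg * missing_replicas
pct = 100 * degraded / total_copies
print(f"{degraded}/{total_copies} objects degraded ({pct:.3f}%)")
```

This reproduces the reported 0.003% and shows why a single undersized PG is enough to keep the cluster in HEALTH_WARN.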
  15. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    Hello, I'm trying to replace an HDD with an SSD. As I understand it, I mark a target OSD out, wait for the cluster to become HEALTH_OK, and then destroy the OSD so I can remove the current HDD physically. But after the osd out operation, HEALTH_WARN never ends. How can I fix it? My version is Virtual Environment 5.4-15. Satoshi
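For reference, the disk-replacement flow this post describes can be sketched as follows (osd.3 and the single-disk swap are taken from the thread; on PVE 5.x the pveceph subcommand names may differ slightly, so treat this as an outline rather than exact syntax, and /dev/sdX is a placeholder):

```shell
# Replace the disk behind osd.3: drain it, wait, then remove it.

ceph osd out 3                # stop placing data on osd.3; rebalancing begins
ceph -s                       # re-run until recovery finishes (ideally HEALTH_OK)

# Once the data has been re-replicated elsewhere:
systemctl stop ceph-osd@3     # stop the daemon for osd.3
pveceph osd destroy 3         # remove osd.3 from the CRUSH map and auth

# ...physically swap the HDD for the SSD, then create a new OSD on it:
pveceph osd create /dev/sdX   # /dev/sdX is a placeholder for the new disk
```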