I have some additional questions regarding this.
Question about "failed to open <dir>: Permission denied" when creating a snapshot
1) Will this be improved by upgrading Proxmox to 7.4 or 8.0?
2) Is there another way to avoid the error when we use Proxmox 6.4?
3) What are the consequences of...
When I use encfs, what problems occur when creating a snapshot?
Could you please tell me the solution?
Also, librbd errors occur when deleting a snapshot; is this also related to encfs?
Is there a solution for this too?
Dear PROXMOX support,
Errors occur when creating and deleting snapshots of a running container on PROXMOX (6.4-13).
Could you tell me what the problem is?
Our PROXMOX setup is a 3-node cluster and uses Ceph for storage.
<Error when creating snapshot>
failed to open /home/.DECRYPT...
Thank you very much. I will try it.
I have one more question. If I mark the PG as lost with the 'mark_unfound_lost delete' command I mentioned, is that meaningless?
Do you mean I should just ignore the warning and continue with the replacement process?
Will the recovery process start when osd.3 is destroyed? But it is very strange that only one PG is degraded.
Should I mark the PG as lost with the following command? I don't know how it works.
ceph pg 11.45 mark_unfound_lost delete
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-pg/
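For reference, the troubleshooting-pg page linked above suggests inspecting the PG before marking anything lost. A sketch of the usual checks, run on a MON node (cluster commands, so they only work against a live Ceph cluster):

```shell
# Inspect PG 11.45 before considering mark_unfound_lost.
ceph health detail          # shows which PGs are degraded/undersized/unfound
ceph pg 11.45 query         # detailed PG state, acting set, recovery info
ceph pg 11.45 list_unfound  # lists unfound objects, if any
```

If `list_unfound` reports zero unfound objects, `mark_unfound_lost delete` has nothing to act on.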
# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 39.87958 root default
-3 10.77039 host vgpm01
3 hdd 7.27730 osd.3 up 0 1.00000
0 ssd 3.49309 osd.0 up...
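As a side note, REWEIGHT 0 in the output above means osd.3 has already been marked out. A quick way to list all out-marked OSDs from a saved `ceph osd tree` dump (the osd.0 values below are assumed, since that line is truncated above):

```shell
# Save the tree output once, then filter it: print OSD names whose
# REWEIGHT column is 0, i.e. OSDs marked "out".
# On OSD rows, column 4 is the NAME (osd.N) and column 6 the REWEIGHT.
cat > osd_tree.txt <<'EOF'
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 39.87958 root default
-3 10.77039 host vgpm01
3 hdd 7.27730 osd.3 up 0 1.00000
0 ssd 3.49309 osd.0 up 1.00000 1.00000
EOF
awk '$4 ~ /^osd\./ && $6 == 0 {print $4}' osd_tree.txt   # -> osd.3
```

On a cluster node you would pipe the live output instead: `ceph osd tree | awk '...'`.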
osd pool default min size = 2
osd pool default size = 3
Yes, the OSDs are online.
I know. I will upgrade after replacing the HDD with an SSD.
pg 11.45 is stuck undersized for XXXXX.XXXXX, current state active+undersized+degraded, last acting [5,4]
The number XXXXX.XXXXX is always...
I found the following messages. Is it stuck?
Degraded data redundancy: 46/1454715 objects degraded (0.003%), 1 pg degraded, 1 pg undersized
pg 11.45 is stuck undersized for 220401.107415, current state active+undersized+degraded, last acting [5,4]
Hello,
I'm trying to replace an HDD with an SSD.
As I understand it, I should mark the target OSD out, wait for the cluster to become HEALTH_OK, then destroy the OSD so I can remove the current HDD physically.
But after the 'osd out' operation, the HEALTH_WARN state never ends. How can I fix it?
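For reference, the replacement flow described above can be sketched with plain Ceph commands (a sketch only; "3" stands in for the target OSD id, and you should wait for rebalancing to finish between steps):

```shell
ceph osd out osd.3           # stop new data going to the OSD; rebalancing starts
ceph -s                      # repeat until HEALTH_OK / all PGs active+clean
systemctl stop ceph-osd@3    # stop the OSD daemon once data migration finishes
ceph osd purge 3 --yes-i-really-mean-it   # remove the OSD from CRUSH, auth, osd map
# Now the HDD can be pulled, the SSD installed, and a new OSD created on it.
```

These are cluster-admin commands, so they must be run on a cluster node with appropriate keys.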
My version is Virtual Environment 5.4-15
Satoshi
The server was forced into the 'FENCE' state again after these errors, so I rebooted the server.
This is the second time I have received an email saying the server was fenced.
I upgraded to the latest version. I have 3 nodes, and one of the 3 nodes got the following errors.
Aug 28 05:08:24 pxmx03 pmxcfs[2643]: [libqb] error: couldn't create file for mmap
Aug 28 05:08:24 pxmx03 pmxcfs[2643]: [libqb] error: qb_rb_open:pve2-request-2643-28396-1022: Too many open files (24)
Aug 28...
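For context, "Too many open files (24)" is errno 24 (EMFILE), meaning the pmxcfs process hit its open-file-descriptor limit. A diagnostic sketch (not a fix) for checking a process's limit and current fd usage; it is shown against the current shell, so substitute the pmxcfs PID (e.g. 2643 from the log above):

```shell
# Show the open-files limit and the current number of open descriptors
# for a given PID (here the current shell; use the pmxcfs PID instead).
pid=$$
grep 'Max open files' "/proc/$pid/limits"
ls "/proc/$pid/fd" | wc -l
```

If the count is near the limit, raising the limit for the service or finding the descriptor leak would be the next step.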