>ceph health detail
Yes, we have around 35 OSDs; below are the requested details:
root@px-sata-sg1-n1:~# ceph status
  cluster:
    id:     8b561bdc-3821-4059-a15d-4a63b3bce13c
    health: HEALTH_WARN
            603 pgs not deep-scrubbed in time
            603 pgs not scrubbed in time
  services:
    mon: 3 daemons, quorum...
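For reference, a sketch of how this warning is usually worked through (the PG id and the interval value below are illustrative, not from this cluster):
# list the PGs named in the warning
ceph health detail | grep 'not deep-scrubbed'
# manually trigger a deep scrub on one of them (PG id 2.1f is an example)
ceph pg deep-scrub 2.1f
# if the OSDs genuinely cannot keep up, the deep-scrub interval can be
# raised, e.g. from the default 7 days to 14 days (1209600 seconds)
ceph config set osd osd_deep_scrub_interval 1209600
If the warning reappears after the manual scrubs finish, the cluster is falling behind schedule rather than hitting a one-off backlog, and the interval tuning is the more relevant part.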
Hi Team,
We are facing a Ceph health warning (please see the attached screenshot). Could you please advise us on how to fix this issue, and let us know why it occurred?
Thank you
We have a Proxmox node containing several VMs. In the summary section under network traffic, all the VMs display the same network usage as the node, although it should be different for each VM. How can we resolve this issue?
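As a first diagnostic (a sketch; VMID 100 is illustrative, and Proxmox names guest NICs tap<vmid>i<n> on the host), the raw per-VM counters can be compared directly on the node:
# byte counters for a specific VM's first NIC, read on the host
cat /sys/class/net/tap100i0/statistics/rx_bytes
cat /sys/class/net/tap100i0/statistics/tx_bytes
If these counters differ between VMs while the summary graphs all look identical to the node's, the problem is in the graphing, not in the traffic accounting.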
Hi,
Could it be a flaw in PVE version 8? Despite setting lower limits, the usage consistently reaches around 140 MB/s and persists regardless of the values we configure.
@fabian
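One thing worth checking (a sketch, assuming VMID 119 and net0, so tap119i0 on the host): as far as we understand, PVE applies the NIC rate limit as traffic shaping on the host-side tap device, so some shaping state should be visible there when a limit is active:
# inspect shaping state on the VM's host-side interface
tc -s qdisc show dev tap119i0
# the guest's outgoing traffic is ingress from the host's point of view
tc -s filter show dev tap119i0 ingress
If nothing shaping-related shows up there after setting a limit, the limit is not being applied at all, which narrows down where to look.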
Hi,
we have a VM whose outgoing traffic is around 140 Mbps, which is much higher than our limits. We have set many values for the network rate limit, such as 12.5, 5, and 0.5, but none of them seems to work. Can anyone advise on this? We have tried different NIC models (VirtIO, Intel E1000), all with the same result.
++++
qm config 119
agent: 1...
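For reference, the rate option on a Proxmox net device is specified in MB/s, not Mbit/s, so rate=12.5 corresponds to 100 Mbit/s. A minimal sketch (the bridge name and model are illustrative; keep your VM's existing MAC and bridge when re-applying the option):
# cap net0 at 5 MB/s (~40 Mbit/s); rate is in MB/s
qm set 119 --net0 virtio,bridge=vmbr0,rate=5
# confirm the setting is present
qm config 119 | grep ^net0
If the observed 140 Mbps was read as 140 MB/s (or vice versa), the unit mismatch alone could explain why the configured values look like they are being ignored.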
We are facing the same error after upgrading Proxmox to 8.1.3:
HEALTH_WARN: 1 pool(s) do not have an application enabled
application not enabled on pool 'scbench'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom...
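The fix is the command from the message itself; the application name should match whatever the pool is actually used for ('rbd' below is just an example choice):
# tag the pool named in the warning with an application
ceph osd pool application enable scbench rbd
Once the pool is tagged, the HEALTH_WARN about the missing application should clear on its own.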