Search results

  1. powersupport

    ceph warning pgs not deep-scrubbed in time

    I couldn't understand the recommended configuration; can anyone advise on it? We have a 7-node cluster with 35 OSDs. Thank you [see the scrub-tuning sketch after this list]
  2. powersupport

    ceph unexpected rebalancing

    Hi, there are sufficient resources on the server, and we have checked the bandwidth; there is no bandwidth limitation. Thank you.
  3. powersupport

    ceph unexpected rebalancing

    Hi, today I removed a snapshot from a VM (2 TB storage) in the Proxmox cluster. Shortly afterward, Ceph started rebalancing, causing all VMs to go down until I halted the removal process and the rebalancing completed. What might have caused this issue? [see the rebalancing diagnostics sketch after this list]
  4. powersupport

    ceph warning pgs not deep-scrubbed in time

    We’ve been receiving warnings about 621 PGs not being deep-scrubbed and 621 PGs not being scrubbed in time for a while now, and the issue doesn't seem to be resolving. Is there a command that can address this problem? There are over 600 PGs showing this error—will manually scrubbing each PG...
  5. powersupport

    proxmox backup server on PVE cluster

    This is a secondary backup for another cluster; other than this, is there any technical downside?
  6. powersupport

    proxmox backup server on PVE cluster

    Is there any downside to installing the backup server in a VM in PVE? Thank you
  7. powersupport

    Inquiry about Port Speed Setting in Proxmox

    Hi, I'm reaching out to ask if there is a way to adjust port speeds within Proxmox. Could you please provide guidance on how to do this? Thank you for your help. [see the port-speed sketch after this list]
  8. powersupport

    Inquiry about Port Forwarding in Proxmox VMs Using the Proxmox Firewall

    Hi team, I would like to know if it's feasible to set up port forwarding in Proxmox VMs using the Proxmox firewall. Thank you. [see the port-forwarding sketch after this list]
  9. powersupport

    Unable to backup to disk

    Here are the results: root@px-sg1-bu3:~# ls -lah /datastore/inpx total 1.1M drwxr-xr-x 4 backup backup 4.0K Jun 9 00:03 . drwxr-xr-x 7 root root 4.0K Sep 19 2023 .. drwxr-x--- 1 backup backup 1.1M Jun 27 2023 .chunks -rw-r--r-- 1 backup backup 286 Jun 9 00:03 .gc-status...
  10. powersupport

    Unable to backup to disk

    We have run the command chown -R backup:backup /datastore/inpx, but the backup is still not working, and the same error persists. Could you please help resolve this issue? Thank you.
  11. powersupport

    Unable to backup to disk

    Hi, Here is the output; please check: ls -lah /datastore root@px-sg1-bu3:~# ls -lah /datastore total 28K drwxr-xr-x 7 root root 4.0K Sep 19 2023 . drwxr-xr-x 19 root root 4.0K Oct 14 2022 .. drwxr-xr-x 4 backup backup 4.0K Jun 7 00:05 inpx drwxr-xr-x 3 backup backup 4.0K Aug 17...
  12. powersupport

    Unable to backup to disk

    Hi Team, we are getting a backup failure; the error message is: ERROR: VM 158 qmp command 'backup' failed - backup connect failed: command error: unable to create backup group "/datastore/inpx/vm/158" - Permission denied (os error 13) INFO: aborting backup job. Please advise us on resolving this issue. [see the permissions sketch after this list]
  13. powersupport

    Ceph Health Warning

    Hi, currently the warning stands at 633 pgs not deep-scrubbed in time and 633 pgs not scrubbed in time. We don't use any mechanism to automate snapshots or anything similar.
  14. powersupport

    Ceph Health Warning

    We'd appreciate any help. Thank you
  15. powersupport

    Ceph Health Warning

    >ceph health detail root@px-sata-sg1-n1:~# ceph status cluster: id: 8b561bdc-3821-4059-a15d-4a63b3bce13c health: HEALTH_WARN 603 pgs not deep-scrubbed in time 603 pgs not scrubbed in time services: mon: 3 daemons, quorum...
  16. powersupport

    Ceph Health Warning

    Yes, we have around 35 OSDs; below are the requested details: root@px-sata-sg1-n1:~# ceph status cluster: id: 8b561bdc-3821-4059-a15d-4a63b3bce13c health: HEALTH_WARN 603 pgs not deep-scrubbed in time 603 pgs not scrubbed in time services: mon: 3...
  17. powersupport

    Ceph Health Warning

    ceph health shows warning # ceph health HEALTH_WARN 603 pgs not deep-scrubbed in time; 603 pgs not scrubbed in time
  18. powersupport

    Ceph Health Warning

    Hi Team, we are facing a Ceph health warning; please see the attached screenshot. Could you please advise us on fixing this issue, and let us know why it occurred? Thank you
  19. powersupport

    Investigating Consistent Network Traffic Display on Proxmox Server

    Here is the screenshot. The same traffic pattern is displayed on the host node and on the VMs and containers.
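
Several of the threads above (items 1, 4, and 13-18) concern the "pgs not deep-scrubbed in time" warning. A minimal sketch of how one might investigate it, assuming a Ceph release with the ceph config interface; the PG id and interval value below are placeholders, not values taken from the threads:

    # Read-only: list the PGs behind the HEALTH_WARN
    ceph health detail

    # Manually deep-scrub one overdue PG (1.2f is a placeholder id)
    ceph pg deep-scrub 1.2f

    # Or widen the deep-scrub deadline so the OSDs can catch up;
    # 1209600 s = 14 days is an example value, not a recommendation
    ceph config set osd osd_deep_scrub_interval 1209600

Raising the interval only relaxes the warning threshold; the backlog itself clears at whatever rate the OSDs can sustain scrubbing.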
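Items 2 and 3 describe rebalancing that began after deleting a 2 TB VM snapshot. A hedged diagnostic sketch, using only read-only commands, to distinguish true recovery/backfill from the heavy client IO a large snapshot delete generates:

    # Cluster-wide view: look for recovery/backfill activity vs. plain client IO
    ceph -s

    # Per-OSD fill levels; a nearly full or flapping OSD can trigger data movement
    ceph osd df tree

If ceph -s shows no backfill while the VMs stall, the cause was likely the snapshot-delete IO itself rather than rebalancing.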
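On item 7: Proxmox VE does not expose physical NIC speed settings in its GUI, so this is typically done on the host with standard Linux tooling. A sketch assuming ethtool is available; eno1 is a placeholder interface name:

    # Show current speed, duplex, and advertised link modes
    ethtool eno1

    # Force 1 Gb/s full duplex with autonegotiation off (placeholder values)
    ethtool -s eno1 speed 1000 duplex full autoneg off

For virtual NICs, a rate limit (in MB/s) can instead be set on the VM's network device in its hardware options.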
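On item 8: the Proxmox VE firewall does packet filtering, not NAT, so port forwarding into a VM is normally done with an iptables DNAT rule on the host. A minimal sketch; the bridge name, ports, and guest address are all placeholders:

    # Forward TCP port 2222 on the host to SSH on a guest at 192.168.1.50 (placeholders)
    iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2222 -j DNAT --to-destination 192.168.1.50:22

A rule like this must also be persisted, e.g. as a post-up line in /etc/network/interfaces, to survive a reboot.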
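Items 9-12 trace a "Permission denied (os error 13)" while creating a backup group on the PBS datastore. Proxmox Backup Server runs its services as the backup user, so every directory from the datastore root downward must be owned by (or at least traversable for) that user. A sketch of the checks one might run; /datastore/inpx comes from the thread, everything else is an assumption:

    # Confirm ownership along the whole path, including the vm/ directory named in the error
    ls -ld /datastore /datastore/inpx /datastore/inpx/vm

    # Re-apply ownership and permissions (X applies execute to directories
    # and to files that are already executable)
    chown -R backup:backup /datastore/inpx
    chmod -R u+rwX /datastore/inpx

If the error persists, it may be worth comparing the datastore path PBS actually has configured (proxmox-backup-manager datastore list) against the path being fixed.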