[SOLVED] What service to restart after root disk full

silvered.dragon

Renowned Member
Nov 4, 2015
I made a mistake in my 5-node Ceph cluster: for the new backup schedule I selected the root local storage on some nodes, and it filled up. Today everything works, but I have no access to the GUI of the affected nodes (I get "connection refused"). All VMs and LXC containers are working fine. I deleted all the wrong dumps on the affected nodes and the storage is OK now, but I still cannot access those nodes. Which services do I have to restart to get connectivity back? If possible I don't want to reboot the entire node, because there is production running today, and since everything works it's OK...
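
For anyone checking the same thing: a quick way to confirm the root filesystem is full and locate the dumps (assuming the default local dump path, /var/lib/vz/dump):

Bash:
# check free space on the root filesystem
df -h /
# size of the local backup dump directory (default path on PVE)
du -sh /var/lib/vz/dump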
 
The service serving the API and the web interface is called pveproxy.

To restart the service, one can run pveproxy restart.
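
Either the daemon's own command or systemd will do; a minimal sketch:

Bash:
# restart the PVE API/web service and check that it came back
pveproxy restart
systemctl status pveproxy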
 
It just gets stuck with no output, so I tried:

Code:
root@node1:~# systemctl status pveproxy
● pveproxy.service - PVE API Proxy Server
     Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; preset: enabled)
     Active: deactivating (final-sigterm) (Result: timeout) since Mon 2023-10-02 10:18:03 CEST; 45min ago
    Process: 860809 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=killed, signal=KILL)
      Tasks: 5 (limit: 154165)
     Memory: 254.4M
        CPU: 283ms
     CGroup: /system.slice/pveproxy.service
             ├─820492 /usr/bin/perl -T /usr/bin/pveproxy stop
             ├─827877 /usr/bin/perl /usr/bin/pvecm updatecerts --silent
             ├─846369 /usr/bin/perl /usr/bin/pvecm updatecerts --silent
             ├─853709 /usr/bin/perl /usr/bin/pvecm updatecerts --silent
             └─860811 /usr/bin/perl /usr/bin/pvecm updatecerts --silent

Oct 02 10:59:50 node1 systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Oct 02 11:01:20 node1 systemd[1]: pveproxy.service: start-pre operation timed out. Terminating.
Oct 02 11:02:50 node1 systemd[1]: pveproxy.service: State 'stop-sigterm' timed out. Killing.
Oct 02 11:02:50 node1 systemd[1]: pveproxy.service: Killing process 860809 (pvecm) with signal SIGKILL.
Oct 02 11:02:50 node1 systemd[1]: pveproxy.service: Killing process 820492 (pveproxy) with signal SIGKILL.
Oct 02 11:02:50 node1 systemd[1]: pveproxy.service: Killing process 827877 (pvecm) with signal SIGKILL.
Oct 02 11:02:50 node1 systemd[1]: pveproxy.service: Killing process 846369 (pvecm) with signal SIGKILL.
Oct 02 11:02:50 node1 systemd[1]: pveproxy.service: Killing process 853709 (pvecm) with signal SIGKILL.
Oct 02 11:02:50 node1 systemd[1]: pveproxy.service: Killing process 860811 (pvecm) with signal SIGKILL.
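
The hang is in the ExecStartPre step, pvecm updatecerts, which needs the pmxcfs cluster filesystem behind /etc/pve to answer. A hedged way to see where it blocks:

Bash:
# run the failing pre-start step by hand; if it hangs, pmxcfs is likely stuck
pvecm updatecerts
# check the service that provides /etc/pve
systemctl status pve-cluster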

All the other services seem OK too, but this one gives errors:

Code:
root@node1:~# systemctl status corosync
● corosync.service - Corosync Cluster Engine
     Loaded: loaded (/lib/systemd/system/corosync.service; enabled; preset: enabled)
     Active: active (running) since Fri 2023-09-29 17:53:36 CEST; 2 days ago
       Docs: man:corosync
             man:corosync.conf
             man:corosync_overview
   Main PID: 3470 (corosync)
      Tasks: 9 (limit: 154165)
     Memory: 302.2M
        CPU: 1h 25min 32.540s
     CGroup: /system.slice/corosync.service
             └─3470 /usr/sbin/corosync -f

Sep 29 19:03:02 node1 corosync[3470]:   [KNET  ] pmtud: Global data MTU changed to: 1397
Sep 29 19:03:04 node1 corosync[3470]:   [QUORUM] Sync members[6]: 1 2 3 4 5 6
Sep 29 19:03:04 node1 corosync[3470]:   [QUORUM] Sync joined[1]: 6
Sep 29 19:03:04 node1 corosync[3470]:   [TOTEM ] A new membership (1.588) was formed. Members joined: 6
Sep 29 19:03:04 node1 corosync[3470]:   [QUORUM] Members[6]: 1 2 3 4 5 6
Sep 29 19:03:04 node1 corosync[3470]:   [MAIN  ] Completed service synchronization, ready to provide service.
Sep 30 12:09:31 node1 corosync[3470]:   [CPG   ] *** 0x5572e4865590 can't mcast to group pve_dcdb_v1 state:1, error:12
Sep 30 12:09:32 node1 corosync[3470]:   [CPG   ] *** 0x5572e4865590 can't mcast to group pve_dcdb_v1 state:1, error:12
Sep 30 12:10:00 node1 corosync[3470]:   [CPG   ] *** 0x5572e4865590 can't mcast to group pve_dcdb_v1 state:1, error:12
Oct 02 10:25:34 node1 corosync[3470]:   [CPG   ] *** 0x5572e4865590 can't mcast to group pve_dcdb_v1 state:1, error:12
 
Could you please post your system logs from journalctl?
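
For example (one possible invocation; -x adds explanatory text and -e jumps to the end):

Bash:
# tail of the journal with explanatory messages
journalctl -xe
# or only the relevant units since the disk filled up
journalctl -u pveproxy -u pve-cluster --since today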
 
Sure. It seems related to Ceph, but all VMs and LXC containers are working:

Bash:
root@node1:~# journalctl -xe
Oct 02 11:32:34 node1 ceph-osd[4449]: 2023-10-02T11:32:34.271+0200 7faacae716c0 -1 monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before 2023-10-02T10:32:34.274937+0200)
Oct 02 11:32:34 node1 ceph-osd[4449]: 2023-10-02T11:32:34.563+0200 7faabcebc6c0 -1 monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before 2023-10-02T10:32:34.568934+0200)
Oct 02 11:32:34 node1 ceph-osd[4455]: 2023-10-02T11:32:34.603+0200 7fe7115466c0 -1 monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before 2023-10-02T10:32:34.607893+0200)
Oct 02 11:32:34 node1 ceph-mgr[3452]: 2023-10-02T11:32:34.915+0200 7f3047d066c0 -1 monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before 2023-10-02T10:32:34.917818+0200)
[... the same "_check_auth_rotating possible clock skew" message repeats roughly once per second for every ceph-osd and for ceph-mgr; dozens of near-identical lines trimmed ...]
Oct 02 11:32:39 node1 kernel: libceph: mon2 (1)10.0.30.3:6789 socket closed (con state OPEN)
[... clock-skew messages continue to the end of the journal ...]
lines 939-1000/1000 (END)
 
And this is what I can see in the GUI, accessing it from one of the working nodes:
[Attachment: Screenshot 2023-10-02 114908.png]

But as I said, I can access all VMs and LXC containers. I'm a little scared of what will happen to Ceph if I reboot the two affected nodes.
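
If a reboot does become unavoidable, the usual Ceph precaution is to keep the cluster from rebalancing while a node is down (standard Ceph flags, not something from this thread):

Bash:
# before the reboot: keep OSDs from being marked out
ceph osd set noout
# after the node is back and its OSDs are up again
ceph osd unset noout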
 
Do all the nodes have the same system time?
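
A quick way to compare, assuming chrony (the default time sync client on current PVE):

Bash:
# on each node: local clock and NTP sync state
timedatectl
chronyc tracking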
 
Yes, they do. I tried systemctl restart chronyd on all nodes and nothing changed, so on the affected nodes I tried:

Bash:
systemctl restart ceph-mon@node1.service
systemctl restart ceph-mon@node2.service

Now I can see a healthy Ceph cluster from the unaffected node, but the other nodes are still not manageable from the GUI. So I tried:

Bash:
root@node1:/tmp# systemctl

ahslog.service
apt-daily-upgrade.service
apt-daily.service
logrotate.service
man-db.service

The listed services show as red (failed). I tried to restart ahslog, but it is still red.
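
To confirm the monitors actually recovered, a standard health check (run from any node with the admin keyring):

Bash:
# overall cluster state; should report HEALTH_OK once the mons are back
ceph -s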
 
I tried to restart logrotate, and I managed to restart all the other red services except ahslog, which is still red. So I tried:
Code:
root@node1:/tmp# systemctl status ahslog
× ahslog.service - Active Health Service Logger
     Loaded: loaded (/lib/systemd/system/ahslog.service; enabled; preset: enabled)
     Active: failed (Result: core-dump) since Mon 2023-10-02 12:31:45 CEST; 1min 20s ago
   Duration: 1.602s
    Process: 949590 ExecStart=/sbin/ahslog -f $OPTIONS (code=dumped, signal=SEGV)
   Main PID: 949590 (code=dumped, signal=SEGV)
        CPU: 139ms

Oct 02 12:31:45 node1 systemd[1]: ahslog.service: Scheduled restart job, restart counter is at 5.
Oct 02 12:31:45 node1 systemd[1]: Stopped ahslog.service - Active Health Service Logger.
Oct 02 12:31:45 node1 systemd[1]: ahslog.service: Start request repeated too quickly.
Oct 02 12:31:45 node1 systemd[1]: ahslog.service: Failed with result 'core-dump'.
Oct 02 12:31:45 node1 systemd[1]: Failed to start ahslog.service - Active Health Service Logger.
 
Sorry, this ahslog thing is related to HPE services; it probably wasn't working even before. So all services are OK and Ceph health is OK, but if I run systemctl restart pveproxy, the console gets stuck again.
 
OK, I fixed it without rebooting. So, for anyone facing the same problem after a full root local disk in a Ceph cluster: if you want to get things back in order without rebooting the servers, my procedure was (see the combined sketch after the list):

  • restart all mons on the affected servers, e.g.
    systemctl restart ceph-mon@node1.service
    systemctl restart ceph-mon@node3.service
    etc.
  • restart all stuck services except pvedaemon and pveproxy
    systemctl restart logrotate.service
    systemctl restart apt-daily-upgrade.service
    systemctl restart apt-daily.service
    systemctl restart man-db.service
  • restart the pve-cluster service
    systemctl restart pve-cluster
  • if some VMs or CTs show a lock icon from a stuck backup, unlock them
    qm unlock <vmid>
    pct unlock <ctid>
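
Put together, a sketch of the whole recovery. Node names and the stuck-unit list are the ones from this thread and will differ per setup; restarting pvedaemon and pveproxy at the end is implied by the thread rather than listed explicitly above:

Bash:
#!/bin/bash
# 1) restart the Ceph monitors on the affected nodes
systemctl restart ceph-mon@node1.service
systemctl restart ceph-mon@node3.service

# 2) restart the stuck services, leaving pvedaemon/pveproxy for last
systemctl restart logrotate.service apt-daily-upgrade.service apt-daily.service man-db.service

# 3) restart the cluster filesystem that pveproxy's pre-start step waits on
systemctl restart pve-cluster

# 4) assumption: the GUI services restart cleanly once pve-cluster is back
systemctl restart pvedaemon pveproxy

# 5) unlock guests left locked by the interrupted backup (replace the IDs)
qm unlock 100
pct unlock 101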

 
