I have a 3-node host cluster and I'm running Ceph across them. I screwed up and let the storage on the Ceph cluster hit 100%. The storage is still available to the running VMs, but I can't take backups, I can't move the machines' disks to other storage, and if I shut down the running VMs they won't start again. My critical machines are running at the moment, but I don't know how long that will last.

Running `ceph -s` from the command line on the hosts just hangs. Running `systemctl status ceph\*.service ceph\*.target` shows that all of the services are running except for the monitor daemon on one of the hosts (exact commands pasted at the bottom).

Is there any way to recover from this? I will gladly delete some extraneous disks from VMs in the Ceph cluster, but I can't get into the cluster to free up the space. Any ideas?
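
For reference, here is what I'm running on the hosts. The last two commands are what I'm planning to check next on the failed monitor; `pve1` is a placeholder for that host, since the mon ID usually matches the hostname:

```sh
# Hangs indefinitely, never prints cluster status:
ceph -s

# Everything shows active except the mon daemon on one host:
systemctl status ceph\*.service ceph\*.target

# Checking that one monitor unit directly ("pve1" is a placeholder hostname):
systemctl status ceph-mon@pve1.service
journalctl -xeu ceph-mon@pve1.service
```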