Greetings everyone, we needed to do some maintenance on a four-node cluster. I moved the critical systems off of it, but apparently the folks who were actually doing the removal and replacement of a node neglected to migrate the noncritical VMs off of it before using pvecm to remove it from the cluster. It looks to me like they did the cluster removal itself correctly, which is great, but since they didn't remove the VMs, we now have some phantom VMs floating around that belong to a node that no longer exists.
I saw this thread about trying to remove them from the filesystem manually, but that was on a two-node cluster: https://forum.proxmox.com/threads/removed-node-vms-still-in-gui.65200/
I'm debating just bringing the errant node back up as a new node with a new name and leaving the old one out forever, but let's say hypothetically I wanted to remove the phantom VMs and bring the node back under its old name. I'm guessing that _if_ I actually wanted to try to remove those phantom VMs, I could remove /etc/pve/nodes/pve03/qemu-server/*, basically, correct? What would the steps be? Would it be as simple as:
- stop pve-cluster and corosync on all remaining nodes
- pmxcfs -l
- remove the offending phantom VMs
- stop pmxcfs
- restart pve-cluster and corosync
- and then bring the missing node back into the cluster? (Rough sketch of what I mean below.)
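If I have that right, this is roughly what I'm picturing, with pve03 standing in for the removed node per the path above. I haven't tested any of this, so treat it as a sketch of my understanding and correct me if it's off:

# on every remaining node, stop the cluster filesystem and corosync
systemctl stop pve-cluster corosync

# then on one node, start pmxcfs in local mode so /etc/pve is writable without quorum
pmxcfs -l

# delete the phantom VM configs left behind by the removed node
rm /etc/pve/nodes/pve03/qemu-server/*.conf

# stop the local-mode pmxcfs and bring the normal services back up
killall pmxcfs
systemctl start corosync pve-cluster

The idea then being that the node rejoins from its own side with pvecm add pointed at one of the surviving nodes, presumably after a wipe/reinstall so it comes back clean under the old name.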