Move VM and CT to another host

TomNick

Member
Jul 16, 2021
Hello everybody, I am running 3 instances in a cluster. Every VM and CT on instances 1 and 2 has its root disk on a shared drive on instance 3. Now instance 3 is messed up and I would like to move all the shared root disks to the local disks on instances 1 and 2. So I copied all the files to the "dump" dir of the other instances and tried:
Code:
qmrestore /var/lib/vz/dump/vm-108-disk-0.raw 108

I got

Code:
cluster not ready - no quorum?

So I tried:

Code:
qm unlock 108

which brought me:

Code:
Configuration file 'nodes/pve1/qemu-server/108.conf' does not exist

So I tried to delete the VM and got:


Code:
trying to acquire cfs lock 'storage-pve-shared' ...
trying to acquire cfs lock 'storage-pve-shared' ...
trying to acquire cfs lock 'storage-pve-shared' ...
trying to acquire cfs lock 'storage-pve-shared' ...
trying to acquire cfs lock 'storage-pve-shared' ...
trying to acquire cfs lock 'storage-pve-shared' ...
trying to acquire cfs lock 'storage-pve-shared' ...
trying to acquire cfs lock 'storage-pve-shared' ...
trying to acquire cfs lock 'storage-pve-shared' ...
TASK ERROR: cfs-lock 'storage-pve-shared' error: no quorum!

I am stuck right now; does anybody have an idea how to handle my problem? Thanks in advance.
 
You should check why there is no quorum. Without it PVE will be read-only. 2 of your 3 nodes are still up and running?
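A quick way to check is, for example:
Code:
# prints cluster, quorum and membership information for this node
pvecm status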
 
Yes, they are up and running, but the VMs and CTs are "dead" since their root disks are located on node 3.
 
You have to fix the cluster first. Without quorum those two remaining PVE nodes will be read-only.
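If node 3 is going to stay down for a while, one common stop-gap is to lower the expected votes so the two remaining nodes become quorate again, for example (this assumes the two surviving nodes have one vote each, and it only changes the runtime votequorum state, not the configuration):
Code:
# temporary workaround, assuming 2 surviving nodes with 1 vote each
pvecm expected 2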
 
Check your cluster status. With 2 of 3 votes you should have quorum, so probably something isn't working.
Code:
Cluster information
-------------------
Name:             T
Config Version:   5
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Thu Nov 30 14:12:51 2023
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1.341
Quorate:          No

Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      2
Quorum:           3 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.35.205 (local)
0x00000002          1 192.168.35.206

I got this, which I do not understand...
 
Then you probably edited your number of votes to more than 1 vote per node.
5 votes are expected, but you only get 2 votes from the remaining nodes, and 3 of 5 votes are required for quorum.

In case you were never running a 5-node cluster, maybe you gave the failed node 3 votes so it could run stand-alone?
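You can check how many votes each node was given in the corosync configuration, e.g.:
Code:
# /etc/corosync/corosync.conf is the local copy; /etc/pve/corosync.conf is the cluster-wide one
# look at the quorum_votes entry in each node {} block
cat /etc/corosync/corosync.conf

Once quorum is back (or temporarily forced with pvecm expected), a copied raw image can usually be attached to a VM with something like the following; "local-lvm" here is just a placeholder for your target storage, and the VM's config file has to exist on that node first (it can be moved under /etc/pve/nodes/<node>/qemu-server/ once /etc/pve is writable again):
Code:
# "local-lvm" is a placeholder for your local storage; adjust VMID and path as needed
qm importdisk 108 /var/lib/vz/dump/vm-108-disk-0.raw local-lvm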
 
