migration failure

proxnoci

I set up a cluster from the old system (pve1) plus two new systems (pve2, pve3).
pve2 & pve3 also share a Ceph disk. pve2 & pve3 have been set up identically except for IP addresses: same VLANs, same names....
The VGs & LVs are all named the same (pve).

Containers were successfully moved from either pve1 -> pve2 or pve1 -> pve3 by relocating their storage to the Ceph volume (named machines)
and then migrating to one of the new systems (no problems there after relocating the volumes to machines).
(Except for serial numbers of components, pve2 & pve3 are the same configuration: Core i7 / 64 GB / 2 TB SATA (for booting, local, local-lvm, ISOs etc.), 2 TB NVMe (for Ceph).)
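
Relocating a container's storage and then migrating it went roughly like the sketch below (standard pct commands; the container ID and the storage name machines are from my setup, adjust as needed):

Code:
# move the container's root disk (and any mount points) onto the shared Ceph storage
pct move-volume 109 rootfs machines
pct move-volume 109 mp1 machines

# then migrate the container to one of the new nodes
pct migrate 109 pve2 --restart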

When attempting to migrate a container from pve2 -> pve3, an error like the one below occurs:

Code:
2023-02-12 22:31:49 starting migration of CT 109 to node 'pve3' (192.168.xx.xxx)
  WARNING: Not using device /dev/sdb for PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH.
  WARNING: PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH prefers device /dev/sdc because device name matches previous.
  WARNING: Not using device /dev/sdb for PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH.
  WARNING: PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH prefers device /dev/sdc because device was seen first.
  Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
2023-02-12 22:31:50 ERROR: no such logical volume pve-data/pve-vms
2023-02-12 22:31:50 aborting phase 1 - cleanup resources
2023-02-12 22:31:50 start final cleanup
2023-02-12 22:31:50 ERROR: migration aborted (duration 00:00:01): no such logical volume pve-data/pve-vms
TASK ERROR: migration aborted
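
For anyone hitting the same warnings: they can usually be reproduced outside the migration task, which makes it easier to see which node and device pair is involved. A quick diagnostic sketch:

Code:
# run on the node that prints the duplicate-PV warnings
vgscan --mknodes
pvs -o pv_name,vg_name,pv_uuid   # shows which device paths claim the same PV UUID
lvs pve-data                     # check whether the pve-vms thin pool is actually visible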

CT 109:
Memory: 1 GB
Swap: 512 MB
Cores: 4
Root Disk: machines:vm-109-disk-0,size=8G
Mount Point (mp1): machines:vm-109-disk-1,mp=/mnt,backup=1,size=21G
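
That summary corresponds to roughly this container config (reconstructed from the values above, so treat it as a sketch of /etc/pve/lxc/109.conf rather than a verbatim copy):

Code:
# /etc/pve/lxc/109.conf (reconstructed, not a verbatim copy)
memory: 1024
swap: 512
cores: 4
rootfs: machines:vm-109-disk-0,size=8G
mp1: machines:vm-109-disk-1,mp=/mnt,backup=1,size=21G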

And another, single-disk container:

Code:
2023-02-12 22:48:03 shutdown CT 111
2023-02-12 22:48:08 starting migration of CT 111 to node 'pve3' (192.168.xx.xxx)
  WARNING: Not using device /dev/sdb for PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH.
  WARNING: PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH prefers device /dev/sdc because device name matches previous.
  WARNING: Not using device /dev/sdb for PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH.
  WARNING: PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH prefers device /dev/sdc because device was seen first.
  Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
  WARNING: Not using device /dev/sdb for PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH.
  WARNING: PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH prefers device /dev/sdc because device name matches previous.
  WARNING: Not using device /dev/sdb for PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH.
  WARNING: PV vWQlhY-quYm-7R9P-uh7R-HV7k-ueho-DfegJH prefers device /dev/sdc because device was seen first.
  Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
2023-02-12 22:48:09 ERROR: no such logical volume pve-data/pve-vms
2023-02-12 22:48:09 aborting phase 1 - cleanup resources
2023-02-12 22:48:09 start final cleanup
2023-02-12 22:48:09 start container on source node
2023-02-12 22:48:10 ERROR: migration aborted (duration 00:00:08): no such logical volume pve-data/pve-vms
TASK ERROR: migration aborted

CT 111:
Memory: 1 GB
Swap: 512 MB
Cores: 3
Root Disk: machines:vm-111-disk-0,size=20G

AFAICT:
There is a pve-data/pve-vms LV on pve1... (but pve1 is not the host, pve2/3 are, and that LV is empty).
That storage is limited with "nodes pve1".

Those warnings existed previously due to an iSCSI disk that was exported through multiple interfaces from a NAS (now fixed).
And/or that disk is not currently visible.
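
For anyone with the same duplicate-path situation: the usual way to stop LVM from picking up the second path is a device filter in lvm.conf. A minimal sketch (the accepted patterns are examples only and must match your actual disks):

Code:
# /etc/lvm/lvm.conf on the node that sees the PV twice
devices {
    # accept the local SATA and NVMe disks, reject everything else
    filter = [ "a|^/dev/sda.*|", "a|^/dev/nvme.*|", "r|.*|" ]
}
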
There are two machines that need to move before rebuilding pve1, including a disk for Ceph.
I need to find a way to migrate the pve-next volume (iSCSI on the NAS...) and the vm-100 VM (a PBS).
Same issue: the backup-store needs to be migrated as well.
(But first I need to be able to migrate between pve2/pve3.)
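
The idea for vm-100 would be the same as for the containers: move its disk onto shared storage first, then migrate. Roughly like this (assuming the disk is scsi0 and that the machines storage also holds VM images; adjust to the real names):

Code:
# move the VM's disk from the local thin pool onto the shared Ceph storage
qm move_disk 100 scsi0 machines --delete 1

# then migrate the VM to one of the new nodes
qm migrate 100 pve2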

Code:
pve1:
  LV            VG        Attr       LSize    Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  mysql         dbdata-vg -wi-a-----   12.00g                                                   
  backup-store  pbs-data  -wi-a-----    3.00t                                                   
  ceph          pbs-data  -wi-a-----    1.00t                                                   
  PBS           pbs-new   -wi-------    1.00t                                                   
  data          pve       twi-aotz-- <147.38g                0.00   1.08                        
  root          pve       -wi-ao----   58.00g                                                   
  swap          pve       -wi-ao----    8.00g                                                   
  pve-iso       pve-data  -wi-ao----  100.00g                                                   
  pve-vms       pve-data  twi-aotz--  500.00g                1.43   4.23                        
  vm-100-disk-0 pve-data  Vwi-a-tz--   20.00g pve-vms        35.83                              
  vm-115-disk-0 pve-next  -wi-a-----  500.00g                                                   

pve2:
  LV                         VG                   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  osd-block-9d7cf733-ef16-   ceph-99ab4859-2346-  -wi-ao---- <1.82t
  data                       pve                  twi-aotz--  1.67t             0.01   0.15
  root                       pve                  -wi-ao---- 96.00g
  swap                       pve                  -wi-ao----  8.00g
  vm-101-disk-0              pve                  Vwi-a-tz-- 16.00g data        0.00

pve3:
  LV                         VG                    Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  osd-block-2199661e-3587-   ceph-f6411b55-eb18-   -wi-ao---- <1.82t
  data                       pve                   twi-a-tz--  1.67t             0.00   0.15
  root                       pve                   -wi-ao---- 96.00g
  swap                       pve                   -wi-ao----  8.00g

Again, the systems migrated successfully from pve1, where those filesystems did exist. Why can't they migrate between pve2 <-> pve3?
The error 5 only occurs on pve1.
 
Hi,
what does your /etc/pve/storage.cfg look like? Is the storage that references pve-data/pve-vms restricted to be available only on node pve1? You can edit the restriction under Datacenter > Storage > Edit > Nodes.
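
A storage definition restricted to a single node looks roughly like this (storage and VG names reconstructed from your LV listing, so this is illustrative only, not your actual file):

Code:
# /etc/pve/storage.cfg (illustrative)
lvmthin: pve-vms
        thinpool pve-vms
        vgname pve-data
        content rootdir,images
        nodes pve1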
 
While awaiting answers, I investigated further.
pve-data/pve-vms is on pve1; THERE it produces the error 5...
This happens even though all pve1 storages were restricted to the pve1 node only.

On the other nodes there is no such volume. The problem was migrating from pve2 -> pve3, so pve-data/pve-vms could not be in use on those nodes...
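
A quick way to double-check which storages a node actually tries to activate is pvesm; storages restricted to pve1 should not show up on the other nodes:

Code:
# run on pve2 and pve3
pvesm status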

In the meantime, pve1 has been erased and set up anew, including a Ceph disk, so all data is now available on shared storage.
The original issue is not reproducible any more.
 
