Problems with migrating the VM back.

pumbixony

New Member
Oct 31, 2023
Hello,

I set up Ceph and Ceph pools in the cluster. I migrated a VM from the first node to the second one and that worked fine, but when I want to migrate it back I get a problem like this:





2023-10-31 10:47:47 starting migration of VM 103 to node 'FMHprox1' (x)
2023-10-31 10:47:47 ERROR: Problem found while scanning volumes - storage 'DATA2' is not available on node 'FMHprox1'
2023-10-31 10:47:47 aborting phase 1 - cleanup resources
2023-10-31 10:47:47 ERROR: migration aborted (duration 00:00:01): Problem found while scanning volumes - storage 'DATA2' is not available on node 'FMHprox1'
TASK ERROR: migration aborted
 
root@FMHprox02:~# qm config 103
boot: order=scsi0;net0
cores: 1
memory: 1024
meta: creation-qemu=7.2.0,ctime=1687942824
name: Apache89
net0: virtio=CE:CF:78:21:80:60,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: pmoxpool01:vm-103-disk-0,iothread=1,size=10G
scsihw: virtio-scsi-single
smbios1: uuid=08639fb6-f290-498a-b744-dd606b08b54a
sockets: 1
vmgenid: 81192168-3a1a-4f64-9139-c84be8ba8895

root@FMHprox02:~# pvesm status
mount error: mount.nfs: access denied by server while mounting 192.168.1.186:/volume1/Proxmox_datastore1
Name                  Type  Status         Total       Used   Available       %
DATA1                 lvm   disabled           0          0           0     N/A
DATA2                 lvm   active    1953513472   52428800  1901084672   2.68%
local                 dir   active     233665408    1607168   232058240   0.69%
pmoxpool01            rbd   active    1854783513  117884697  1736898816   6.36%
synology-nas-storage  nfs   inactive           0          0           0   0.00%

root@FMHprox02:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content rootdir,images,vztmpl,iso,snippets,backup
        shared 0

lvm: DATA1
        vgname DATA1
        content rootdir,images
        nodes FMHprox
        shared 0

nfs: synology-nas-storage
        export /volume1/Proxmox_datastore1
        path /mnt/pve/synology-nas-storage
        server 192.168.1.186
        content rootdir,vztmpl,images,iso,snippets,backup
        options vers=3
        prune-backups keep-all=1

lvm: DATA2
        vgname DATA2
        content rootdir,images
        nodes FMHprox02
        shared 0

rbd: pmoxpool01
        content images,rootdir
        krbd 0
        pool pmoxpool01
 
Hi,
please check whether you have an orphaned disk belonging to the VM on the DATA2 storage, e.g. use lvs, or run qm rescan --vmid 103 to have it show up as an unused disk in the VM configuration. In Proxmox VE 7, such disks are still picked up automatically for migration; in Proxmox VE 8 they are not anymore, because that causes confusion in situations like this.
 
Can you explain exactly what I need to do?

I don't really understand
 
You can use the command lvs to list the logical volumes on the source node. In Proxmox VE 7, migration will attempt to migrate any disk with the VM's ID, even if it is not listed in the VM configuration. If there is such a disk, you might want to remove or rename it to make migration work.

Alternatively, you can use the command qm rescan --vmid 103 to have such disks show up in the VM configuration as unused disks. Then you can move the disk to a different storage to make migration work.
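
Just as a rough sketch of what that could look like on the source node (assuming the volume on DATA2 really is an unused leftover; the vm-103-disk-0 and old-103-disk-0 names are only examples, not confirmed volume names):

root@FMHprox02:~# lvs DATA2                                      # list the logical volumes in the DATA2 volume group
root@FMHprox02:~# lvremove DATA2/vm-103-disk-0                   # delete the leftover volume (only if it is really unused!), or ...
root@FMHprox02:~# lvrename DATA2 vm-103-disk-0 old-103-disk-0    # ... rename it so it no longer matches the vm-<VMID>-* naming scheme
root@FMHprox02:~# qm rescan --vmid 103                           # or pick it up as an unused disk in the VM configuration instead

Please double-check with qm config 103 that the volume is not referenced anywhere before removing anything.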
 
root@FMHprox02:~# lvs
LV                                             VG                                         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
vm-101-disk-0                                  DATA2                                      -wi-a----- 20.00g
vm-101-disk-1                                  DATA2                                      -wi-a----- 20.00g
vm-103-disk-0                                  DATA2                                      -wi-a----- 10.00g
osd-block-173031f0-a76f-43d6-bc2d-f409e67c5a1d ceph-6ca7815f-81f6-4246-856c-04d3b4b17d95  -wi-ao---- <1.82t

root@FMHprox02:~# qm rescan --vmid103
Unknown option: vmid103
400 unable to parse option
qm disk rescan [OPTIONS]
 
A space is missing: qm rescan --vmid 103
 
root@FMHprox02:~# qm rescan --vmid 103
rescan volumes...
mount error: mount.nfs: access denied by server while mounting 192.168.1.186:/volume1/Proxmox_datastore1
root@FMHprox02:~#
 
Seems like an issue with the NFS access rights. If the specific node isn't supposed to access it, you can use the nodes argument in the storage configuration to limit the storage to the other nodes.
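
For example (only a sketch; which of your nodes are actually allowed to mount the export on the Synology side is an assumption here), you could restrict the storage to the node that does have access:

root@FMHprox02:~# pvesm set synology-nas-storage --nodes FMHprox1    # restrict the NFS storage to the listed node(s); FMHprox1 is just an example

or add a corresponding nodes line to the synology-nas-storage section in /etc/pve/storage.cfg. If FMHprox02 is supposed to have access, check the NFS permissions for it on the Synology itself.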
 
