[SOLVED] Migrating VMs on shared storage triggers a full disk copy

jeefs

I'm working on replacing my single standalone node with a new one. I created a cluster containing both the old node and the new one. All VMs are stored on a separate NAS using LVM over iSCSI. The goal is to decommission the old host, so I'm trying to migrate all the VMs over, which should be quick since they're on shared storage. However, Proxmox is trying to copy the VM disks from the iSCSI LVM on the old node (node1) to the iSCSI LVM on the new node (node2).



The migration log shows the following:

2024-06-21 12:10:04 use dedicated network address for sending migration traffic (192.168.30.1)
2024-06-21 12:10:04 starting migration of CT 105 to node 'ProxmoxMiniNode01' (192.168.30.1)
2024-06-21 12:10:04 found local volume 'qnap-prod-iscsi-lun:vm-105-disk-0' (in current VM config)
2024-06-21 12:10:06 volume qnap-prod-iscsi-lun/vm-105-disk-0 already exists - importing with a different name
2024-06-21 12:10:06 Logical volume "vm-105-disk-1" created.
2024-06-21 12:10:10 515178496 bytes (515 MB, 491 MiB) copied, 3 s, 172 MB/s
2024-06-21 12:10:13 1065156608 bytes (1.1 GB, 1016 MiB) copied, 6 s, 177 MB/s
2024-06-21 12:10:16 1829961728 bytes (1.8 GB, 1.7 GiB) copied, 9 s, 203 MB/s
2024-06-21 12:10:20 2308767744 bytes (2.3 GB, 2.2 GiB) copied, 12 s, 186 MB/s
2024-06-21 12:10:25 2594766848 bytes (2.6 GB, 2.4 GiB) copied, 18 s, 144 MB/s
2024-06-21 12:10:28 3382116352 bytes (3.4 GB, 3.1 GiB) copied, 21 s, 161 MB/s
2024-06-21 12:10:31 4177330176 bytes (4.2 GB, 3.9 GiB) copied, 24 s, 174 MB/s
2024-06-21 12:10:34 4872732672 bytes (4.9 GB, 4.5 GiB) copied, 27 s, 180 MB/s
2024-06-21 12:10:37 5688000512 bytes (5.7 GB, 5.3 GiB) copied, 30 s, 190 MB/s
2024-06-21 12:10:40 6456410112 bytes (6.5 GB, 6.0 GiB) copied, 33 s, 196 MB/s
2024-06-21 12:10:43 7265517568 bytes (7.3 GB, 6.8 GiB) copied, 36 s, 202 MB/s
2024-06-21 12:10:46 8078032896 bytes (8.1 GB, 7.5 GiB) copied, 39 s, 207 MB/s
2024-06-21 12:10:48 131072+0 records in
2024-06-21 12:10:48 131072+0 records out
2024-06-21 12:10:48 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 42.2037 s, 204 MB/s
2024-06-21 12:11:05 7113+247918 records in
2024-06-21 12:11:05 7113+247918 records out
2024-06-21 12:11:05 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 59.0888 s, 145 MB/s
2024-06-21 12:11:05 successfully imported 'qnap-prod-iscsi-lun:vm-105-disk-1'
2024-06-21 12:11:05 volume 'qnap-prod-iscsi-lun:vm-105-disk-0' is 'qnap-prod-iscsi-lun:vm-105-disk-1' on the target
2024-06-21 12:11:05 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=ProxmoxMiniNode01' -o 'UserKnownHostsFile=/etc/pve/nodes/ProxmoxMiniNode01/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@192.168.30.1 pvesr set-state 105 \''{}'\'
Logical volume "vm-105-disk-0" successfully removed.
2024-06-21 12:11:06 start final cleanup
2024-06-21 12:11:07 migration finished successfully (duration 00:01:04)
TASK OK

Since this is a small LXC used as the example, I don't mind the copying, but I have a VM that uses 20 TB and I don't have the space for it to be copied within the same storage. I thought about just moving the config files in /etc/pve/nodes from one host to the other, but since I'll eventually add more nodes in the future, I'd like to avoid having this issue.
Any ideas?
 
Can confirm that moving the .conf file for VMs and containers from /etc/pve/nodes/node1/lxc to /etc/pve/nodes/node2/lxc works as expected. Started the LXC up afterwards with no issues.
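For anyone following along, a minimal sketch of that manual move (assuming container 105 currently lives on node1; the container should be stopped first):

Code:
# on node1: stop the container before moving its config
pct shutdown 105
# /etc/pve is the cluster-wide pmxcfs, so the move can be done from either node;
# relocating the .conf reassigns the guest to node2
mv /etc/pve/nodes/node1/lxc/105.conf /etc/pve/nodes/node2/lxc/105.conf
# on node2: start it there
pct start 105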
 
2024-06-21 12:10:04 found local volume 'qnap-prod-iscsi-lun:vm-105-disk-0' (in current VM config)
2024-06-21 12:10:06 volume qnap-prod-iscsi-lun/vm-105-disk-0 already exists - importing with a different name
2024-06-21 12:10:06 Logical volume "vm-105-disk-1" created.
Hi @jeefs,
No one else has recently reported a similar issue, and many people use iSCSI/LVM, so it's more likely than not that the issue is unique to your environment.
I'd recommend that you review your configuration as well as the disk layout. If you are not seeing anything out of the ordinary, please provide it here:

- cat /etc/pve/storage.cfg
- each node: lsscsi
- each node: lsblk
- each node: pvs;vgs;lvs
- qm config [vmid_in_question]
- each node: pvesm status
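
If it helps, the per-node outputs can be gathered in one pass with something like this (a sketch, assuming root SSH between the nodes and hypothetical node names):

Code:
for node in node1 node2; do
    echo "===== $node ====="
    ssh root@$node 'lsscsi; lsblk; pvs; vgs; lvs; pvesm status'
done
# /etc/pve is cluster-wide, so these only need to run once
cat /etc/pve/storage.cfg
qm config <vmid_in_question>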


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
If PVE copies the VM's disks, it may mean it considers the storage on node1 and node2 to be two distinct storages, even though you know it is the same one.
Can you ensure in "Datacenter" -> "Storage" that there is only 1 iSCSI target and that "All" is selected in its "Nodes" property?
Did you add the iSCSI storage to your new node2 while it was standalone, and THEN add node2 to node1's cluster?
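
The same checks can be done from a shell (a sketch, reusing the storage IDs from the log above):

Code:
# a 'nodes' line in the definition would scope the storage to specific nodes
grep -A 6 'qnap-prod-iscsi' /etc/pve/storage.cfg
# run on each node; the iSCSI and LVM entries should show up on both
pvesm status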

Have a nice day,
 
@Spaneta I can confirm there is only 1 iSCSI target and that it is set to "All". I did not add the iSCSI storage to the node; once it was added to the cluster, the node auto-added the storage based on my config.

@bbgeek17 I just reinstalled Proxmox on all nodes and the issue is still present. I built a new cluster, added the nodes and storage to it, and the same issue persists. I didn't see anything out of the ordinary, especially since these are fresh installs, so I'll include the command outputs. See below.

/etc/pve/storage.cfg (I'll write a description next to each entry):

dir: local
disable
path /var/lib/vz
content iso,vztmpl,backup
shared 0

nfs: qnap-backup (qnap backup server. No issues)
export /qnap-backup
path /mnt/pve/qnap-backup
server 192.168.30.10
content backup
prune-backups keep-all=1

nfs: qnap-prod-util (qnap prod server. Utility drive iso, templates, etc)
export /QNAP-Prod
path /mnt/pve/qnap-prod-util
server 192.168.30.8
content vztmpl,iso
prune-backups keep-all=1

iscsi: qnap-prod-iscsi (qnap prod server. iscsi for vm storage)
portal 192.168.30.8
target iqn.2004-04.com.qnap:ts-932px:iscsi.target-1.67dd2f
content none

lvm: qnap-prod-iscsi-lun (lvm for qnap prod vm storage. This is where all vms are stored)
vgname qnap-prod-iscsi-lun
base qnap-prod-iscsi:0.0.0.scsi-36e843b63b25bd6bd63fbd4d95dbda0d0
content images,rootdir
saferemove 0
shared 0

I'll attach the individual nodes' outputs as attachments since I hit the character limit. NOTE: there are currently only 2 nodes. One node has 2 votes and the other 1; since I destroyed the cluster and reinstalled, I decided to replace one of the nodes, so it's on order and I gave node1 an extra vote until the new one shows up.

Finally, since this happens to all VMs and LXCs, here is just one example of a qm config:

qm config 129
boot: order=scsi0;net0
cores: 2
cpu: x86-64-v2-AES
memory: 4096
meta: creation-qemu=8.1.2,ctime=1706582155
name: nginxproxymanager
net0: virtio=BC:24:11:F7:CE:AF,bridge=vmbr1,firewall=1,tag=35
numa: 0
ostype: l26
scsi0: qnap-prod-iscsi-lun:vm-129-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=f03215db-1c60-41bc-95fd-79e91c0f8609
sockets: 1
vmgenid: df8833d3-916d-4fd6-8ab8-c03fadb1b6a1
 


The man page for "pvesm" (the PVE storage manager) says:
Code:
--shared <boolean>
           Indicate that this is a single storage with the same contents on all nodes (or all listed in the nodes option). It will not make the contents of a local storage automatically accessible to other nodes, it just marks
           an already shared storage as such!

A Boolean in computer science is a true/false variable; by convention, 1 is true and 0 is false.
The storage you consider "shared" is marked "shared 0" (i.e., shared=false) in your configuration. It therefore makes sense that a full copy occurs.
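
Something like this should be all it takes (using the storage ID from your config); the GUI equivalent is ticking the "Shared" checkbox on that storage under Datacenter -> Storage:

Code:
# mark the LVM-over-iSCSI storage as truly shared; migrations will then
# reuse the existing volume instead of copying it
pvesm set qnap-prod-iscsi-lun --shared 1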

Good luck



 
@bbgeek17 Oh man, I can't believe I didn't see that. I just assumed that since the LVM was showing on all nodes, shared was already selected. Even when pulling those command outputs I didn't think twice about it. Can confirm that 100% solved it.
 
The storage pool will show up on all nodes unless you scope it to particular nodes with the "nodes xxx" attribute.
After all, you have "local" storage. It is present on each node, but it's not the same across all nodes; each one is unique/local, and it's not shared.
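
For illustration, a hypothetical storage.cfg entry scoped to a single node would carry a nodes line like this:

Code:
dir: node1-scratch
path /var/lib/vz-scratch
content images
nodes node1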



 
