Migration of VM

Klekele

Jul 6, 2022
Hi,

I've searched the forum but didn't find a good answer to my question. I have a cluster with a few nodes, and two of the nodes have an NVMe drive. I created a VM on node1 with one disk on local storage plus one disk on the NVMe storage. When I try to migrate it to node2, I cannot, because on node2 I cannot add a storage with the same ID "nvme" as on the first node. So what is the best practice for migrating a VM with two local disks to another node?

How I added the NVMe datastore:

1. Datacenter > Storage > Add > LVM-Thin, and I added it with the ID "nvme"
2. Datacenter > Storage > Add > LVM-Thin, and I added it with the ID "nvme-node2" (because I got an error that the ID "nvme" is already in use)

Thanks for your answers.
 
The storage config is cluster-wide.
So you have to make sure that the underlying storage you want to migrate with is set up, named and mounted identically on every node involved. If that is the case, it is sufficient to have only one storage entry for it in PVE and to restrict that entry to the corresponding nodes.
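For illustration, a single node-restricted LVM-Thin entry in /etc/pve/storage.cfg could look like this (the storage ID, pool, VG and node names here are just placeholders):

Code:
lvmthin: fast-ssd
thinpool data
vgname ssdvg
content images,rootdir
nodes nodeA,nodeB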

Please provide the full output in code-tags of: cat /etc/pve/storage.cfg
 
Output of storage.cfg:

Code:
dir: local
path /var/lib/vz
content images,vztmpl,snippets,backup,iso
shared 0

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

nfs: nas-px1
export /home/px/px1
path /mnt/pve/nas-px1
server IP
content backup
prune-backups keep-last=24

nfs: nas-px
export /home/px/px
path /mnt/pve/nas-px
server IP
content images,backup
prune-backups keep-last=24

nfs: nas-px2
export /home/px/px2
path /mnt/pve/nas-px2
server IP
content images,backup
nodes px2
prune-backups keep-last=14

nfs: nas-px3
export /home/px/px3
path /mnt/pve/nas-px3
server IP
content images,backup
nodes px3
prune-backups keep-last=14

lvmthin: nvme
thinpool nvme
vgname nvme
content rootdir,images
nodes px4

nfs: nas-px4
export /home/px/px4
path /mnt/pve/nas-px4
server IP
content backup,images
nodes px4
prune-backups keep-all=1

pbs: proxmox-bkp
datastore proxmox-bkp
server IP
content backup
fingerprint aa.......
prune-backups keep-all=1
username root@pam

lvmthin: nvme-px5
thinpool nvme
vgname nvme
content images,rootdir
nodes px5
 
lvmthin: nvme
thinpool nvme
vgname nvme
content rootdir,images
nodes px4

lvmthin: nvme-px5
thinpool nvme
vgname nvme
content images,rootdir
nodes px5

Since thinpool and vgname are the same on both nodes, remove lvmthin: nvme-px5 from the storage configuration and add px5 as a node to lvmthin: nvme. (All via the GUI.)

Make sure that no guests are already using/referencing lvmthin: nvme-px5!
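If you prefer the CLI over the GUI, the equivalent steps could look roughly like this (just a sketch; double-check first that nothing references nvme-px5 any more):

Bash:
# extend the existing storage entry to both nodes
pvesm set nvme --nodes px4,px5
# remove the now redundant duplicate entry (this only touches the config, not the data)
pvesm remove nvme-px5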
 
But these are local NVMe drives, so one nvme storage is on px4 and one is on px5, and I need to move a VM from px4 to px5.
Storage configuration of the VM that is on px4:

virtio0: local-lvm:vm-201-disk-0,iothread=1,size=700G
virtio1: nvme:vm-201-disk-0,size=50G
 
It is the same as with your lvmthin: local-lvm; you also have only one entry for it, although I strongly assume you have it, physically, on all of your nodes, since you did not restrict it to one specific node, no?
 
Sorry, but I'm totally confused now :) So is there a way to have the local NVMes on px4 and px5 both available as "nvme", so that I can easily live-migrate between these two nodes?
 
OK, I'll try to write it again, maybe I was not clear...

I have two nodes:

px4 with:

[Screenshot: disks on px4]

and px5 with:

[Screenshot: disks on px5]

The first node I created was px4, where I added the NVMe storage via Datacenter > Storage > Add > LVM-Thin and named that storage "nvme". Then I bought px5 and tried to add its NVMe storage as "nvme" via Datacenter > Storage > Add > LVM-Thin, but I cannot use the same name.

So I want to move a VM from px4, which has one disk created on the /dev/sda storage and one disk created on /dev/nvme0n1, over to px5. But since px5 does not have a storage with the same name, I cannot migrate it; I can only migrate the disk on /dev/sda, because that storage name is the same on px4 and px5.

I hope it is clearer now.
 
I did understand it the first time and already gave you the solution...

Then I bought px5 and tried to add its NVMe storage as "nvme" via Datacenter > Storage > Add > LVM-Thin, but I cannot use the same name.

This is why you do not add an additional storage entry for it; instead, you edit your lvmthin: nvme storage and add px5 as a node to it.

If this still does not make it clear to you, someone else will have to explain it, sorry.
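After that change, the relevant entry in /etc/pve/storage.cfg should end up looking roughly like this:

Code:
lvmthin: nvme
thinpool nvme
vgname nvme
content images,rootdir
nodes px4,px5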
 
Migration works, yes... but the nvme disk stays on px4's local NVMe, so do I now go to VM > Hardware > select the nvme disk and move it to nvme-px5?
 
Can you please post the full task-log of that migration in code-tags?
 
Code:
2022-12-27 12:29:02 starting migration of VM 122 to node 'px5' (IP)
2022-12-27 12:29:02 found local disk 'local:122/vm-122-disk-0.qcow2' (in current VM config)
2022-12-27 12:29:02 found local disk 'nvme:vm-122-disk-0' (in current VM config)
2022-12-27 12:29:02 starting VM 122 on remote node 'px5'
2022-12-27 12:29:05 volume 'nvme:vm-122-disk-0' is 'nvme:vm-122-disk-0' on the target
2022-12-27 12:29:05 volume 'local:122/vm-122-disk-0.qcow2' is 'local:122/vm-122-disk-0.qcow2' on the target
2022-12-27 12:29:05 start remote tunnel
2022-12-27 12:29:06 ssh tunnel ver 1
2022-12-27 12:29:06 starting storage migration
2022-12-27 12:29:06 scsi0: start migration to nbd:unix:/run/qemu-server/122_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
drive-scsi0: transferred 0.0 B of 10.0 GiB (0.00%) in 0s
drive-scsi0: transferred 98.0 MiB of 10.0 GiB (0.96%) in 1s
drive-scsi0: transferred 210.0 MiB of 10.0 GiB (2.05%) in 2s
drive-scsi0: transferred 321.0 MiB of 10.0 GiB (3.13%) in 3s
drive-scsi0: transferred 431.0 MiB of 10.0 GiB (4.21%) in 4s
drive-scsi0: transferred 546.0 MiB of 10.0 GiB (5.33%) in 5s
drive-scsi0: transferred 658.0 MiB of 10.0 GiB (6.43%) in 6s
drive-scsi0: transferred 763.0 MiB of 10.0 GiB (7.45%) in 7s
drive-scsi0: transferred 871.0 MiB of 10.0 GiB (8.51%) in 8s
drive-scsi0: transferred 981.0 MiB of 10.0 GiB (9.58%) in 9s
drive-scsi0: transferred 1.1 GiB of 10.0 GiB (10.78%) in 10s
drive-scsi0: transferred 1.2 GiB of 10.0 GiB (11.78%) in 11s
drive-scsi0: transferred 1.3 GiB of 10.0 GiB (12.86%) in 12s
drive-scsi0: transferred 1.4 GiB of 10.0 GiB (13.88%) in 13s
drive-scsi0: transferred 1.5 GiB of 10.0 GiB (14.91%) in 14s
drive-scsi0: transferred 1.6 GiB of 10.0 GiB (15.99%) in 15s
drive-scsi0: transferred 1.7 GiB of 10.0 GiB (17.08%) in 16s
drive-scsi0: transferred 1.8 GiB of 10.0 GiB (18.12%) in 17s
drive-scsi0: transferred 1.9 GiB of 10.0 GiB (19.14%) in 18s
drive-scsi0: transferred 2.0 GiB of 10.0 GiB (20.22%) in 19s
drive-scsi0: transferred 2.1 GiB of 10.0 GiB (21.31%) in 20s
drive-scsi0: transferred 2.2 GiB of 10.0 GiB (22.38%) in 21s
drive-scsi0: transferred 2.3 GiB of 10.0 GiB (23.48%) in 22s
drive-scsi0: transferred 2.5 GiB of 10.0 GiB (24.54%) in 23s
drive-scsi0: transferred 2.6 GiB of 10.0 GiB (25.61%) in 24s
drive-scsi0: transferred 2.7 GiB of 10.0 GiB (26.67%) in 25s
drive-scsi0: transferred 2.8 GiB of 10.0 GiB (27.75%) in 26s
drive-scsi0: transferred 2.9 GiB of 10.0 GiB (28.84%) in 27s
drive-scsi0: transferred 3.0 GiB of 10.0 GiB (29.93%) in 28s
drive-scsi0: transferred 3.1 GiB of 10.0 GiB (30.99%) in 29s
drive-scsi0: transferred 3.2 GiB of 10.0 GiB (32.07%) in 30s
drive-scsi0: transferred 3.3 GiB of 10.0 GiB (33.16%) in 31s
drive-scsi0: transferred 3.4 GiB of 10.0 GiB (34.24%) in 32s
drive-scsi0: transferred 3.5 GiB of 10.0 GiB (35.31%) in 33s
drive-scsi0: transferred 3.6 GiB of 10.0 GiB (36.35%) in 34s
drive-scsi0: transferred 3.7 GiB of 10.0 GiB (37.43%) in 35s
drive-scsi0: transferred 3.9 GiB of 10.0 GiB (38.59%) in 36s
drive-scsi0: transferred 4.0 GiB of 10.0 GiB (39.55%) in 37s
drive-scsi0: transferred 4.1 GiB of 10.0 GiB (40.64%) in 38s
drive-scsi0: transferred 4.2 GiB of 10.0 GiB (41.75%) in 39s
drive-scsi0: transferred 4.3 GiB of 10.0 GiB (42.84%) in 40s
drive-scsi0: transferred 4.4 GiB of 10.0 GiB (43.92%) in 41s
drive-scsi0: transferred 4.5 GiB of 10.0 GiB (45.00%) in 42s
drive-scsi0: transferred 4.6 GiB of 10.0 GiB (46.05%) in 43s
drive-scsi0: transferred 4.7 GiB of 10.0 GiB (47.16%) in 44s
drive-scsi0: transferred 4.8 GiB of 10.0 GiB (48.21%) in 45s
drive-scsi0: transferred 4.9 GiB of 10.0 GiB (49.30%) in 46s
drive-scsi0: transferred 5.0 GiB of 10.0 GiB (50.40%) in 47s
drive-scsi0: transferred 5.1 GiB of 10.0 GiB (51.46%) in 48s
drive-scsi0: transferred 5.3 GiB of 10.0 GiB (52.53%) in 49s
drive-scsi0: transferred 5.4 GiB of 10.0 GiB (53.60%) in 50s
drive-scsi0: transferred 5.5 GiB of 10.0 GiB (54.69%) in 51s
drive-scsi0: transferred 5.6 GiB of 10.0 GiB (55.63%) in 52s
drive-scsi0: transferred 5.7 GiB of 10.0 GiB (56.69%) in 53s
drive-scsi0: transferred 5.8 GiB of 10.0 GiB (57.76%) in 54s
drive-scsi0: transferred 5.9 GiB of 10.0 GiB (58.85%) in 55s
drive-scsi0: transferred 6.0 GiB of 10.0 GiB (59.94%) in 56s
drive-scsi0: transferred 6.1 GiB of 10.0 GiB (61.03%) in 57s
drive-scsi0: transferred 6.2 GiB of 10.0 GiB (62.11%) in 58s
drive-scsi0: transferred 6.3 GiB of 10.0 GiB (63.19%) in 59s
drive-scsi0: transferred 6.4 GiB of 10.0 GiB (64.28%) in 1m
drive-scsi0: transferred 6.5 GiB of 10.0 GiB (65.39%) in 1m 1s
drive-scsi0: transferred 6.6 GiB of 10.0 GiB (66.41%) in 1m 2s
drive-scsi0: transferred 6.8 GiB of 10.0 GiB (67.50%) in 1m 3s
drive-scsi0: transferred 6.9 GiB of 10.0 GiB (68.59%) in 1m 4s
drive-scsi0: transferred 7.0 GiB of 10.0 GiB (69.69%) in 1m 5s
drive-scsi0: transferred 7.1 GiB of 10.0 GiB (70.78%) in 1m 6s
drive-scsi0: transferred 7.2 GiB of 10.0 GiB (71.88%) in 1m 7s
drive-scsi0: transferred 7.3 GiB of 10.0 GiB (72.97%) in 1m 8s
drive-scsi0: transferred 7.4 GiB of 10.0 GiB (74.02%) in 1m 9s
drive-scsi0: transferred 7.5 GiB of 10.0 GiB (75.11%) in 1m 10s
drive-scsi0: transferred 7.6 GiB of 10.0 GiB (76.20%) in 1m 11s
drive-scsi0: transferred 7.7 GiB of 10.0 GiB (77.28%) in 1m 12s
drive-scsi0: transferred 7.8 GiB of 10.0 GiB (78.35%) in 1m 13s
drive-scsi0: transferred 7.9 GiB of 10.0 GiB (79.47%) in 1m 14s
drive-scsi0: transferred 8.1 GiB of 10.0 GiB (80.54%) in 1m 15s
drive-scsi0: transferred 8.2 GiB of 10.0 GiB (81.64%) in 1m 16s
drive-scsi0: transferred 8.3 GiB of 10.0 GiB (82.72%) in 1m 17s
drive-scsi0: transferred 8.4 GiB of 10.0 GiB (83.81%) in 1m 18s
drive-scsi0: transferred 8.5 GiB of 10.0 GiB (84.89%) in 1m 19s
drive-scsi0: transferred 8.6 GiB of 10.0 GiB (85.99%) in 1m 20s
drive-scsi0: transferred 8.7 GiB of 10.0 GiB (87.08%) in 1m 21s
drive-scsi0: transferred 8.8 GiB of 10.0 GiB (88.17%) in 1m 22s
drive-scsi0: transferred 8.9 GiB of 10.0 GiB (89.27%) in 1m 23s
drive-scsi0: transferred 9.0 GiB of 10.0 GiB (90.37%) in 1m 24s
drive-scsi0: transferred 9.1 GiB of 10.0 GiB (91.46%) in 1m 25s
drive-scsi0: transferred 9.3 GiB of 10.0 GiB (92.55%) in 1m 26s
drive-scsi0: transferred 9.4 GiB of 10.0 GiB (94.06%) in 1m 28s
drive-scsi0: transferred 9.5 GiB of 10.0 GiB (95.04%) in 1m 29s
drive-scsi0: transferred 9.6 GiB of 10.0 GiB (96.00%) in 1m 30s
drive-scsi0: transferred 9.7 GiB of 10.0 GiB (97.09%) in 1m 31s
drive-scsi0: transferred 9.8 GiB of 10.0 GiB (98.20%) in 1m 32s
drive-scsi0: transferred 9.9 GiB of 10.0 GiB (99.30%) in 1m 33s
drive-scsi0: transferred 10.0 GiB of 10.0 GiB (100.00%) in 1m 34s, ready
all 'mirror' jobs are ready
2022-12-27 12:30:40 scsi1: start migration to nbd:unix:/run/qemu-server/122_nbd.migrate:exportname=drive-scsi1
drive mirror is starting for drive-scsi1
drive-scsi1: transferred 0.0 B of 10.0 GiB (0.00%) in 0s
drive-scsi1: transferred 104.0 MiB of 10.0 GiB (1.02%) in 1s
drive-scsi1: transferred 209.0 MiB of 10.0 GiB (2.04%) in 2s
drive-scsi1: transferred 327.0 MiB of 10.0 GiB (3.19%) in 3s
drive-scsi1: transferred 431.0 MiB of 10.0 GiB (4.21%) in 4s
drive-scsi1: transferred 550.0 MiB of 10.0 GiB (5.37%) in 5s
drive-scsi1: transferred 655.0 MiB of 10.0 GiB (6.40%) in 6s
drive-scsi1: transferred 774.0 MiB of 10.0 GiB (7.56%) in 7s
drive-scsi1: transferred 879.0 MiB of 10.0 GiB (8.58%) in 8s
drive-scsi1: transferred 997.0 MiB of 10.0 GiB (9.74%) in 9s
drive-scsi1: transferred 1.1 GiB of 10.0 GiB (10.78%) in 10s
drive-scsi1: transferred 1.2 GiB of 10.0 GiB (11.81%) in 11s
drive-scsi1: transferred 1.3 GiB of 10.0 GiB (12.95%) in 12s
drive-scsi1: transferred 1.4 GiB of 10.0 GiB (14.10%) in 13s
drive-scsi1: transferred 1.5 GiB of 10.0 GiB (15.13%) in 14s
drive-scsi1: transferred 1.6 GiB of 10.0 GiB (16.16%) in 15s
drive-scsi1: transferred 1.7 GiB of 10.0 GiB (17.19%) in 16s
drive-scsi1: transferred 1.8 GiB of 10.0 GiB (18.33%) in 17s
drive-scsi1: transferred 1.9 GiB of 10.0 GiB (19.36%) in 18s
drive-scsi1: transferred 2.0 GiB of 10.0 GiB (20.50%) in 19s
drive-scsi1: transferred 2.2 GiB of 10.0 GiB (21.50%) in 20s
drive-scsi1: transferred 2.3 GiB of 10.0 GiB (22.65%) in 21s
drive-scsi1: transferred 2.4 GiB of 10.0 GiB (23.66%) in 22s
drive-scsi1: transferred 2.5 GiB of 10.0 GiB (24.80%) in 23s
drive-scsi1: transferred 2.6 GiB of 10.0 GiB (25.83%) in 24s
drive-scsi1: transferred 2.7 GiB of 10.0 GiB (26.86%) in 25s
drive-scsi1: transferred 2.8 GiB of 10.0 GiB (28.00%) in 26s
drive-scsi1: transferred 2.9 GiB of 10.0 GiB (29.01%) in 27s
drive-scsi1: transferred 3.0 GiB of 10.0 GiB (30.04%) in 28s
drive-scsi1: transferred 3.1 GiB of 10.0 GiB (31.06%) in 29s
drive-scsi1: transferred 3.2 GiB of 10.0 GiB (32.09%) in 30s
drive-scsi1: transferred 3.3 GiB of 10.0 GiB (33.12%) in 31s
drive-scsi1: transferred 3.4 GiB of 10.0 GiB (34.16%) in 32s
drive-scsi1: transferred 3.5 GiB of 10.0 GiB (35.07%) in 33s
drive-scsi1: transferred 3.6 GiB of 10.0 GiB (35.96%) in 34s
drive-scsi1: transferred 3.7 GiB of 10.0 GiB (36.86%) in 35s
drive-scsi1: transferred 3.8 GiB of 10.0 GiB (38.02%) in 36s
drive-scsi1: transferred 3.9 GiB of 10.0 GiB (39.05%) in 37s
drive-scsi1: transferred 4.0 GiB of 10.0 GiB (40.08%) in 38s
drive-scsi1: transferred 4.1 GiB of 10.0 GiB (41.09%) in 39s
drive-scsi1: transferred 4.2 GiB of 10.0 GiB (42.12%) in 40s
drive-scsi1: transferred 4.3 GiB of 10.0 GiB (43.14%) in 41s
drive-scsi1: transferred 4.4 GiB of 10.0 GiB (44.29%) in 42s
drive-scsi1: transferred 4.5 GiB of 10.0 GiB (45.32%) in 43s
drive-scsi1: transferred 4.6 GiB of 10.0 GiB (46.34%) in 44s
drive-scsi1: transferred 4.8 GiB of 10.0 GiB (47.81%) in 45s
drive-scsi1: transferred 4.9 GiB of 10.0 GiB (48.77%) in 46s
drive-scsi1: transferred 5.0 GiB of 10.0 GiB (49.91%) in 47s
drive-scsi1: transferred 5.1 GiB of 10.0 GiB (50.96%) in 48s
drive-scsi1: transferred 5.2 GiB of 10.0 GiB (51.97%) in 49s
drive-scsi1: transferred 5.3 GiB of 10.0 GiB (53.02%) in 50s
drive-scsi1: transferred 5.4 GiB of 10.0 GiB (54.20%) in 51s
drive-scsi1: transferred 5.5 GiB of 10.0 GiB (55.21%) in 52s
drive-scsi1: transferred 5.6 GiB of 10.0 GiB (56.25%) in 53s
drive-scsi1: transferred 5.7 GiB of 10.0 GiB (57.40%) in 54s
drive-scsi1: transferred 5.8 GiB of 10.0 GiB (58.42%) in 55s
drive-scsi1: transferred 6.0 GiB of 10.0 GiB (59.57%) in 56s
drive-scsi1: transferred 6.1 GiB of 10.0 GiB (60.60%) in 57s
drive-scsi1: transferred 6.2 GiB of 10.0 GiB (61.74%) in 58s
drive-scsi1: transferred 6.3 GiB of 10.0 GiB (62.75%) in 59s
drive-scsi1: transferred 6.4 GiB of 10.0 GiB (63.79%) in 1m
drive-scsi1: transferred 6.5 GiB of 10.0 GiB (64.95%) in 1m 1s
drive-scsi1: transferred 6.6 GiB of 10.0 GiB (65.99%) in 1m 3s
drive-scsi1: transferred 6.7 GiB of 10.0 GiB (67.10%) in 1m 4s
drive-scsi1: transferred 6.8 GiB of 10.0 GiB (68.15%) in 1m 5s
drive-scsi1: transferred 6.9 GiB of 10.0 GiB (69.20%) in 1m 6s
drive-scsi1: transferred 7.0 GiB of 10.0 GiB (70.34%) in 1m 7s
drive-scsi1: transferred 7.1 GiB of 10.0 GiB (71.37%) in 1m 8s
drive-scsi1: transferred 7.3 GiB of 10.0 GiB (72.52%) in 1m 9s
drive-scsi1: transferred 7.4 GiB of 10.0 GiB (73.55%) in 1m 10s
drive-scsi1: transferred 7.5 GiB of 10.0 GiB (74.71%) in 1m 11s
drive-scsi1: transferred 7.6 GiB of 10.0 GiB (75.75%) in 1m 12s
drive-scsi1: transferred 7.7 GiB of 10.0 GiB (76.79%) in 1m 13s
drive-scsi1: transferred 7.8 GiB of 10.0 GiB (77.93%) in 1m 14s
drive-scsi1: transferred 7.9 GiB of 10.0 GiB (78.96%) in 1m 15s
drive-scsi1: transferred 8.0 GiB of 10.0 GiB (79.98%) in 1m 16s
drive-scsi1: transferred 8.1 GiB of 10.0 GiB (81.12%) in 1m 17s
drive-scsi1: transferred 8.2 GiB of 10.0 GiB (82.15%) in 1m 18s
drive-scsi1: transferred 8.3 GiB of 10.0 GiB (83.32%) in 1m 19s
drive-scsi1: transferred 8.4 GiB of 10.0 GiB (84.36%) in 1m 20s
drive-scsi1: transferred 8.5 GiB of 10.0 GiB (85.38%) in 1m 21s
drive-scsi1: transferred 8.6 GiB of 10.0 GiB (86.41%) in 1m 22s
drive-scsi1: transferred 8.8 GiB of 10.0 GiB (87.57%) in 1m 23s
drive-scsi1: transferred 8.9 GiB of 10.0 GiB (88.59%) in 1m 24s
drive-scsi1: transferred 9.0 GiB of 10.0 GiB (89.62%) in 1m 25s
drive-scsi1: transferred 9.1 GiB of 10.0 GiB (90.63%) in 1m 26s
drive-scsi1: transferred 9.2 GiB of 10.0 GiB (91.66%) in 1m 27s
drive-scsi1: transferred 9.3 GiB of 10.0 GiB (92.69%) in 1m 28s
drive-scsi1: transferred 9.4 GiB of 10.0 GiB (93.85%) in 1m 29s
drive-scsi1: transferred 9.5 GiB of 10.0 GiB (94.87%) in 1m 30s
drive-scsi1: transferred 9.6 GiB of 10.0 GiB (95.78%) in 1m 31s
drive-scsi1: transferred 9.6 GiB of 10.0 GiB (96.42%) in 1m 32s
drive-scsi1: transferred 9.7 GiB of 10.0 GiB (97.45%) in 1m 33s
drive-scsi1: transferred 9.8 GiB of 10.0 GiB (98.47%) in 1m 34s
drive-scsi1: transferred 9.9 GiB of 10.0 GiB (99.49%) in 1m 35s
drive-scsi1: transferred 10.0 GiB of 10.0 GiB (100.00%) in 1m 36s, ready
all 'mirror' jobs are ready
2022-12-27 12:32:16 starting online/live migration on unix:/run/qemu-server/122.migrate
2022-12-27 12:32:16 set migration capabilities
2022-12-27 12:32:16 migration downtime limit: 100 ms
2022-12-27 12:32:16 migration cachesize: 256.0 MiB
2022-12-27 12:32:16 set migration parameters
2022-12-27 12:32:16 start migrate command to unix:/run/qemu-server/122.migrate
2022-12-27 12:32:17 average migration speed: 2.0 GiB/s - downtime 31 ms
2022-12-27 12:32:17 migration status: completed
all 'mirror' jobs are ready
drive-scsi0: Completing block job_id...
drive-scsi0: Completed successfully.
drive-scsi1: Completing block job_id...
drive-scsi1: Completed successfully.
drive-scsi0: mirror-job finished
drive-scsi1: mirror-job finished
2022-12-27 12:32:18 stopping NBD storage migration server on target.
  Logical volume "vm-122-disk-0" successfully removed
2022-12-27 12:32:22 migration finished successfully (duration 00:03:21)
TASK OK
 
Please provide the config of that VM: qm config 122

But to me, it looks fine.
This VM has two vdisks:
Bash:
2022-12-27 12:29:02 found local disk 'local:122/vm-122-disk-0.qcow2' (in current VM config)
2022-12-27 12:29:02 found local disk 'nvme:vm-122-disk-0' (in current VM config)
And two vdisks got transferred:
Bash:
drive-scsi0: transferred 10.0 GiB of 10.0 GiB (100.00%) in 1m 34s, ready
drive-scsi1: transferred 10.0 GiB of 10.0 GiB (100.00%) in 1m 36s, ready

Compare the "VM Disks" content of your: nvme storage on both nodes before and after the migration.
 
Code:
root@px5:~# qm config 122
boot: order=scsi0;ide2;net0
cores: 1
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=6.1.1,ctime=1672140522
name: test-nvme
net0: virtio=1E:06:26:15:63:16,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: nvme:vm-122-disk-0,size=10G
scsi1: local:122/vm-122-disk-0.qcow2,format=qcow2,size=10G
scsihw: virtio-scsi-pci
smbios1: uuid=c0b1e6fb-d3e9-4198-b6e6-16ef57bc1e65
sockets: 1
unused0: nvme-px5:vm-122-disk-1
vmgenid: f788c7b9-a38c-4899-8835-517e28adc4ff

Yes, everything works, but I'm 100% sure that if I also add px5 as a node to the nvme storage, this will still be running on px4's NVMe, because it is a local NVMe storage.
 
but I'm 100% sure that if I also add px5 as a node to the nvme storage, this will still be running on px4's NVMe, because it is a local NVMe storage

What makes you so sure of this? How exactly did you verify it?

If you create a VM on px5 with a vdisk on nvme, this vdisk will be on the local NVMe storage of px5. Otherwise the VM would not be able to start, because there is no magical sharing of local storage over the network happening in the background!

If you want to test it the hard way: create a VM on px5 with its vdisk on nvme. Then shut down the px4 node and start the just-created VM. If it starts without a problem, you have your confirmation...
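A less disruptive check (again just a sketch, assuming the volume group is named nvme on both nodes): look for the VM's logical volume directly on each node.

Bash:
# on the node the VM was created on, its disk should show up:
lvs nvme | grep vm-<vmid>
# the same command on the other node should return nothing for that VMID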
 
I verified this by checking the free disk space on the NVMe storage: the space is used on px4, and only after I move the disk to px5 do I see it on px5's NVMe... so just adding a node to that NVMe storage and then migrating doesn't do the job.
 
