Hello
I wanted to test a procedure to rename both a Volume Group backing an LVM storage and the storage itself.
Specifically, on an iSCSI LUN I created an LVM storage named "lvm-iscsi" on top of a VG named "vg-iscsi", then I created a VM (VM-move, VMID 200):
vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 1 0 wz--n- <39.50g 19.75g
vg-iscsi 1 1 0 wz--n- 149.99g 99.99g
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root pve -wi-ao---- <19.75g
vm-200-disk-0 vg-iscsi -wi-a----- 50.00g
At this point, with the VM running on one node of the cluster, I ran the following command from the CLI:
vgrename vg-iscsi vg-iscsi-new
This renamed the VG correctly:
# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 1 0 wz--n- <39.50g 19.75g
vg-iscsi-new 1 1 0 wz--n- 149.99g 99.99g
So I edited the /etc/pve/storage.cfg file
FROM
lvm: lvm-iscsi
vgname vg-iscsi
base iscsi-truenas:0.0.0.scsi-STrueNAS_iSCSI_Disk_08002734c8fc000
content images
saferemove 1
shared 1
TO
lvm: lvm-iscsi-new
vgname vg-iscsi-new
base iscsi-truenas:0.0.0.scsi-STrueNAS_iSCSI_Disk_08002734c8fc000
content images
saferemove 1
shared 1
and I also edited the "scsi0" value inside the VM config file
vi /etc/pve/nodes/proxmox1/qemu-server/200.conf
FROM
scsi0: lvm-iscsi:vm-200-disk-0,iothread=1,size=50G
TO
scsi0: lvm-iscsi-new:vm-200-disk-0,iothread=1,size=50G
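For repeatability, the two manual edits above could also be scripted; this is my own sketch (not part of the original procedure), shown against sample strings rather than the live files under /etc/pve:

```shell
# Sketch (my own, not from the post): the storage.cfg and 200.conf edits
# done by hand above, expressed as sed substitutions. Sample strings
# stand in for the real files; on a node you would run the same
# expressions against /etc/pve/storage.cfg and
# /etc/pve/nodes/proxmox1/qemu-server/200.conf (back them up first).
storage='lvm: lvm-iscsi
	vgname vg-iscsi
	content images'
vmline='scsi0: lvm-iscsi:vm-200-disk-0,iothread=1,size=50G'

new_storage=$(printf '%s\n' "$storage" \
  | sed -e 's/^lvm: lvm-iscsi$/lvm: lvm-iscsi-new/' \
        -e 's/vgname vg-iscsi$/vgname vg-iscsi-new/')
new_vmline=$(printf '%s\n' "$vmline" \
  | sed 's/^scsi0: lvm-iscsi:/scsi0: lvm-iscsi-new:/')

printf '%s\n' "$new_storage"
printf '%s\n' "$new_vmline"
```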
At this point the device-mapper symlink still pointed to the old name:
/dev/mapper/vg--iscsi-vm--200--disk--0 -> ../dm-1
so I ran the following commands on all nodes:
vgscan
vgchange -ay
This correctly updated the link:
/dev/mapper/vg--iscsi--new-vm--200--disk--0 -> ../dm-1
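The doubled dashes in that mapper name come from LVM's device-mapper name escaping: each '-' inside the VG or LV name is written as '--', and the VG and LV parts are then joined with a single '-'. A small sketch of that mangling (the helper function name is my own):

```shell
# Sketch of LVM's dm name escaping: '-' inside the VG or LV name is
# doubled, then VG and LV are joined with a single '-'. This is why
# vg-iscsi-new / vm-200-disk-0 shows up in /dev/mapper as
# vg--iscsi--new-vm--200--disk--0. The dm_name helper is hypothetical.
dm_name() {
  vg=$(printf '%s' "$1" | sed 's/-/--/g')
  lv=$(printf '%s' "$2" | sed 's/-/--/g')
  printf '%s-%s\n' "$vg" "$lv"
}
dm_name vg-iscsi vm-200-disk-0       # the old mapper name
dm_name vg-iscsi-new vm-200-disk-0   # the new mapper name
```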
Opening the GUI, I could verify that all the changes had been applied correctly; moreover, the VM continued to run without any problem.
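One plausible reason the VM kept running is that a process keeps its open device handle across a rename: the kernel handle does not depend on the path it was opened under. A generic, non-Proxmox illustration of that behavior:

```shell
# Generic illustration (not Proxmox-specific): a process that opened a
# file before it was renamed keeps reading it without trouble, much as
# the running QEMU kept its handle on the LV across the vgrename.
tmp=$(mktemp -d)
echo data > "$tmp/old-name"
exec 3< "$tmp/old-name"             # open, like QEMU opening the LV at start
mv "$tmp/old-name" "$tmp/new-name"  # rename, like vgrename
read -r line <&3                    # the old handle still works
exec 3<&-
echo "$line"
rm -rf "$tmp"
```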
Finally, to test the changes, I tried several times to migrate the VM, but always got the following result:
Code:
2024-11-19 15:42:41 starting migration of VM 200 to node 'proxmox2' (10.0.2.20)
2024-11-19 15:42:41 starting VM 200 on remote node 'proxmox2'
2024-11-19 15:42:50 start remote tunnel
2024-11-19 15:42:52 ssh tunnel ver 1
2024-11-19 15:42:52 starting online/live migration on unix:/run/qemu-server/200.migrate
2024-11-19 15:42:52 set migration capabilities
2024-11-19 15:42:52 migration downtime limit: 100 ms
2024-11-19 15:42:52 migration cachesize: 128.0 MiB
2024-11-19 15:42:52 set migration parameters
2024-11-19 15:42:52 start migrate command to unix:/run/qemu-server/200.migrate
2024-11-19 15:42:53 migration active, transferred 42.4 MiB of 1.0 GiB VM-state, 94.7 MiB/s
2024-11-19 15:42:53 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:42:54 migration active, transferred 80.8 MiB of 1.0 GiB VM-state, 39.5 MiB/s
2024-11-19 15:42:54 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:42:55 migration active, transferred 115.9 MiB of 1.0 GiB VM-state, 45.7 MiB/s
2024-11-19 15:42:55 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:42:56 migration active, transferred 147.4 MiB of 1.0 GiB VM-state, 45.5 MiB/s
2024-11-19 15:42:56 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:42:57 migration active, transferred 203.6 MiB of 1.0 GiB VM-state, 69.5 MiB/s
2024-11-19 15:42:57 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:42:58 migration active, transferred 267.5 MiB of 1.0 GiB VM-state, 67.9 MiB/s
2024-11-19 15:42:58 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:42:59 migration active, transferred 318.4 MiB of 1.0 GiB VM-state, 54.4 MiB/s
2024-11-19 15:42:59 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:00 migration active, transferred 355.9 MiB of 1.0 GiB VM-state, 47.7 MiB/s
2024-11-19 15:43:00 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:01 migration active, transferred 383.8 MiB of 1.0 GiB VM-state, 263.5 MiB/s
2024-11-19 15:43:01 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:02 migration active, transferred 424.2 MiB of 1.0 GiB VM-state, 45.9 MiB/s
2024-11-19 15:43:02 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:04 migration active, transferred 483.7 MiB of 1.0 GiB VM-state, 27.2 MiB/s
2024-11-19 15:43:04 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:05 migration active, transferred 517.6 MiB of 1.0 GiB VM-state, 37.6 MiB/s
2024-11-19 15:43:05 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:06 migration active, transferred 549.8 MiB of 1.0 GiB VM-state, 36.1 MiB/s
2024-11-19 15:43:06 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:07 migration active, transferred 594.0 MiB of 1.0 GiB VM-state, 310.5 MiB/s
2024-11-19 15:43:07 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:08 migration active, transferred 649.0 MiB of 1.0 GiB VM-state, 64.6 MiB/s
2024-11-19 15:43:08 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:09 migration active, transferred 701.4 MiB of 1.0 GiB VM-state, 53.3 MiB/s
2024-11-19 15:43:09 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:10 migration active, transferred 757.5 MiB of 1.0 GiB VM-state, 39.4 MiB/s
2024-11-19 15:43:10 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:11 migration active, transferred 768.2 MiB of 1.0 GiB VM-state, 12.1 MiB/s
2024-11-19 15:43:11 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:12 migration active, transferred 771.7 MiB of 1.0 GiB VM-state, 3.2 MiB/s
2024-11-19 15:43:12 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:13 migration active, transferred 779.7 MiB of 1.0 GiB VM-state, 4.6 MiB/s
2024-11-19 15:43:13 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory
2024-11-19 15:43:16 migration active, transferred 825.8 MiB of 1.0 GiB VM-state, 5.0 MiB/s
2024-11-19 15:43:16 xbzrle: send updates to 138 pages in 5.8 KiB encoded memory, cache-miss 0.52%
2024-11-19 15:43:16 migration status error: failed
2024-11-19 15:43:16 ERROR: online migrate failure - aborting
2024-11-19 15:43:16 aborting phase 2 - cleanup resources
2024-11-19 15:43:16 migrate_cancel
2024-11-19 15:43:27 ERROR: migr
The only way to resolve the issue was to shut the VM down and start it again, at which point the migration worked correctly. What could I analyze to understand the issue?
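As a starting point for analysis (my own suggestion, using standard Proxmox/Linux tooling), I would compare the disk path the still-running QEMU process was started with against what the current config would generate, and check the target node's task log and journal around the failure time:

```shell
# Diagnostic sketch (my suggestion, not from the post). On the source
# node one could run:
#
#   qm showcmd 200 | tr ' ' '\n' | grep -i iscsi
#   tr '\0' '\n' < /proc/$(cat /var/run/qemu-server/200.pid)/cmdline | grep -i iscsi
#
# If the live process still shows the pre-rename path while `qm showcmd`
# shows the new one, the running QEMU predates the rename, which would
# fit the observation that only a cold restart made migration work.
# The comparison itself, with sample values standing in for the outputs:
running='/dev/vg-iscsi/vm-200-disk-0'      # sample: path held by the live QEMU
config='/dev/vg-iscsi-new/vm-200-disk-0'   # sample: path from the current config
if [ "$running" != "$config" ]; then
  echo "mismatch: live QEMU=$running, config=$config"
fi
```

A mismatch like this would be worth correlating with the migration task log on the target node before drawing conclusions.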