Live Migration creating double disks

ady106034

New Member
Dec 20, 2022
Hello,

Proxmox: 6.4
I'm migrating VMs within a cluster from one node to another. When migrating, an identical LVM volume is first created on the new node, then the migration starts, and after that another new LVM volume is created on the new node, which is the one used by the migrated VM. Because of this, double the space is being used.

So basically, two LVM disks are being created for a single VM and we have to delete the extra one manually. This never happened before.

We are using thin LVM.
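For reference, both volumes are visible with plain LVM tools on the target node; a quick way to check is shown below (the VG name 'vg_thin' is just a placeholder, not our actual name):

# list the logical volumes in the thin pool's volume group on the target node
lvs vg_thin
# seeing two LVs for the same VMID here is the duplication described above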
 
Proxmox: 6.4
Proxmox VE 6.x has been EOL since 2022-07; we recommend upgrading to the supported 7.x series as soon as possible:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

I'm migrating VMs within a cluster from one node to another. When migrating, an identical LVM volume is first created on the new node, then the migration starts, and after that another new LVM volume is created on the new node, which is the one used by the migrated VM. Because of this, double the space is being used.
Well, is the LVM actually a shared one (e.g., provided through iSCSI or the like)?
If so, did you tick the "shared" checkbox on the respective storage entry to make Proxmox VE aware of this?
Otherwise, it thinks those are two different, locally available VGs that happen to have the same name, and thus it does a local-storage live migration.
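For illustration, this is roughly how a truly shared LVM storage differs from a node-local thin pool in /etc/pve/storage.cfg (all names below are placeholders, not taken from your setup):

# LVM volume group on top of shared storage (e.g. iSCSI) - the 'shared' flag
# tells Proxmox VE that every node accesses the very same disks:
lvm: san-lvm
        vgname vg_san
        content images
        shared 1

# node-local LVM-thin pool - no 'shared' flag, each node has its own copy:
lvmthin: local-thin
        vgname pve
        thinpool data
        content images,rootdir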
 
Please post the full task log of the migration. Note that the migration will also pick up any unused/orphaned disks associated with the VM and migrate them.
 
We have LVM-thin on both the source and the destination node, and they are separate (local) pools. Neither is marked as shared.
We added the storage for both via Datacenter > Storage > Add > LVM-Thin.
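To double-check how the storages ended up being defined, we can inspect the configuration and status on the nodes (standard Proxmox VE commands, nothing specific to our setup):

# storage definitions are cluster-wide, one file lists all entries
cat /etc/pve/storage.cfg

# per-node view of which storages are enabled/active and their usage
pvesm status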
 

Attachments

  • Selection_475.png (21.9 KB)
2023-01-19 22:02:19 starting migration of VM 4191 to node 'us03' (IP)
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
2023-01-19 22:02:19 found local disk 'lvthin1:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7' (via storage)
2023-01-19 22:02:19 found local disk 'thin_pool:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7' (in current VM config)
2023-01-19 22:02:19 copying local disk images
2023-01-19 22:02:21 Logical volume "vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7" created.
2023-01-19 23:20:24 8355840+0 records in
2023-01-19 23:20:24 8355840+0 records out
2023-01-19 23:20:24 547608330240 bytes (548 GB, 510 GiB) copied, 4683.13 s, 117 MB/s
2023-01-19 23:20:44 1499+33399250 records in
2023-01-19 23:20:44 1499+33399250 records out
2023-01-19 23:20:44 547608330240 bytes (548 GB, 510 GiB) copied, 4702.66 s, 116 MB/s
2023-01-19 23:20:44 successfully imported 'us03_thin_pool:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7'
2023-01-19 23:20:44 volume 'lvthin1:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7' is 'us03_thin_pool:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7' on the target
2023-01-19 23:20:44 starting VM 4191 on remote node 'us03'
2023-01-19 23:20:46 volume 'thin_pool:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7' is 'us03_thin_pool:vm-4191-disk-0' on the target
2023-01-19 23:20:46 start remote tunnel
2023-01-19 23:20:47 ssh tunnel ver 1
2023-01-19 23:20:47 starting storage migration
2023-01-19 23:20:47 sata0: start migration to nbd:unix:/run/qemu-server/4191_nbd.migrate:exportname=drive-sata0
drive mirror is starting for drive-sata0
drive-sata0: transferred 0.0 B of 510.0 GiB (0.00%) in 0s
drive-sata0: transferred 107.0 MiB of 510.0 GiB (0.02%) in 1s
drive-sata0: transferred 218.0 MiB of 510.0 GiB (0.04%) in 2s
drive-sata0: transferred 330.0 MiB of 510.0 GiB (0.06%) in 3s
...
drive-sata0: transferred 3.2 GiB of 510.0 GiB (0.62%) in 29s
drive-sata0: transferred 3.3 GiB of 510.0 GiB (0.64%) in 30s
drive-sata0: transferred 3.4 GiB of 510.0 GiB (0.66%) in 31s
drive-sata0: transferred 3.5 GiB of 510.0 GiB (0.68%) in 32s
drive-sata0: transferred 3.6 GiB of 510.0 GiB (0.70%) in 33s
drive-sata0: transferred 3.7 GiB of 510.0 GiB (0.73%) in 34s
drive-sata0: transferred 3.8 GiB of 510.0 GiB (0.75%) in 35s
drive-sata0: transferred 3.9 GiB of 510.0 GiB (0.77%) in 36s
drive-sata0: transferred 4.0 GiB of 510.0 GiB (0.79%) in 37s
...
2023-01-20 00:40:44 migration active, transferred 9.2 GiB of 8.3 GiB VM-state, 111.3 MiB/s
2023-01-20 00:40:45 migration active, transferred 9.4 GiB of 8.3 GiB VM-state, 112.5 MiB/s
2023-01-20 00:40:46 migration active, transferred 9.5 GiB of 8.3 GiB VM-state, 111.5 MiB/s
2023-01-20 00:40:47 migration active, transferred 9.6 GiB of 8.3 GiB VM-state, 133.8 MiB/s
2023-01-20 00:40:47 xbzrle: send updates to 28776 pages in 15.5 MiB encoded memory, cache-miss 99.04%, overflow 680
2023-01-20 00:40:47 average migration speed: 96.8 MiB/s - downtime 39 ms
2023-01-20 00:40:47 migration status: completed
all 'mirror' jobs are ready
drive-sata0: Completing block job_id...
drive-sata0: Completed successfully.
drive-sata0: mirror-job finished
2023-01-20 00:40:48 stopping NBD storage migration server on target.
Logical volume "vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7" successfully removed
2023-01-20 00:41:18 migration finished successfully (duration 02:38:59)
TASK OK
 
2023-01-19 23:20:44 volume 'lvthin1:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7' is 'us03_thin_pool:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7' on the target
2023-01-19 23:20:44 starting VM 4191 on remote node 'us03'
2023-01-19 23:20:46 volume 'thin_pool:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7' is 'us03_thin_pool:vm-4191-disk-0' on the target
2023-01-19 23:20:46 start remote tunnel

This may be causing the issue.
 
2023-01-19 22:02:19 found local disk 'lvthin1:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7' (via storage)
2023-01-19 22:02:19 found local disk 'thin_pool:vm-4191-dlVuQBHRbM87WVRU-2eM9zHFd7XBox5D7' (in current VM config)

You have the same thin pool configured twice: once as storage "thin_pool" and once as storage "lvthin1".
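If that is indeed the case, a possible cleanup could look like the following - only a sketch, so please first verify which entry your VM configs actually reference and which pool each entry points to (the node names are assumptions based on the log above):

# restrict each storage definition to the node(s) where that pool really exists,
# so a migration no longer sees the same volume as two local disks:
pvesm set thin_pool --nodes us02
pvesm set lvthin1 --nodes us03

# or, if one entry is just a duplicate definition of the same pool,
# remove that definition (this does not delete any data on disk):
pvesm remove lvthin1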
 
