Disk migration - name has changed on target

fireon

Hello all,

We migrated from an existing Proxmox 7 (pve-manager/7.4-17/513c62be running kernel: 5.15.116-1-pve) SAN cluster to a new Proxmox 8 (proxmox-ve: 8.2.0 running kernel: 6.8.4-2-pve) Ceph cluster.

VM migration: CephFS on the target, exported via an NFS server (NFS version 4.2). The share was mounted on both clusters, and the disks were moved while the VMs were running. This worked very well, except for one disk.
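
For reference, the NFS storage was added roughly like this on both clusters (export path and server address are only examples; the storage name is the one from the log below):

Code:
# on the target cluster: export the CephFS mount over NFS, e.g. in /etc/exports
/mnt/pve/cephfs 10.0.0.0/24(rw,sync,no_root_squash)
# on both clusters: add the share as an NFS storage for disk images, mounted with NFS 4.2
pvesm add nfs ceph_migrate --server 10.0.0.10 --export /mnt/pve/cephfs --content images --options vers=4.2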

Code:
create full clone of drive scsi4 (san-vm:vm-202-disk-2)
Formatting '/mnt/pve/ceph_migrate/images/202/vm-202-disk-1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=279172874240 lazy_refcounts=off refcount_bits=16
drive mirror is starting for drive-scsi4

Why was the disk renamed on the target? In my opinion, that makes no sense. Has anyone observed this behavior before?

Many thanks and best regards,
fireon
 
/mnt/pve/ceph_migrate/images/202/vm-202-disk-1.qcow2
Why do you store VM disks in a Ceph FS and not an RBD pool?

Besides that, I guess you're asking why the source is san-vm:vm-202-disk-2 and the target is vm-202-disk-1.qcow2? The storage plugins usually just use the next free numerical ID at the end for that storage. Therefore, the order in which the new disk images are allocated plays a role.
Another factor can be that, in the past, that number might have already been in use, so the source got a higher number for that reason.
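
You can see that allocation behaviour directly with pvesm; with an empty file name the plugin picks the next free vm-<vmid>-disk-<n> itself (storage name and size are just examples):

Code:
# let the storage plugin pick the next free disk name for VM 202
pvesm alloc ceph_migrate 202 '' 4G --format qcow2
# list the existing images to see which numbers are already taken
pvesm list ceph_migrate --vmid 202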
 
Why do you store VM disks in a Ceph FS and not an RBD pool?
It was only for the migration. After that, the VM was shut down on the source and started again on the target. Then we moved the disk to RBD.
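
That last move was just a normal disk move, roughly like this (the RBD storage name is a placeholder):

Code:
# move the migrated disk from the NFS storage to the RBD pool and delete the old copy
qm disk move 202 scsi4 ceph-rbd --delete 1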

The storage plugins will usually just use the next free numerical ID at the end for that storage. Therefore, the order in which the new disk images are allocated plays a role.

Ok, that explains a lot. Thank you.

My colleague just told me that during the migration the first disk was written over the second disk. The process then stopped exactly at the end of the disk (at the size of the first disk). We are trying to reproduce the behavior.
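
One way to check this is to compare the image on the target with its source, e.g.:

Code:
# check virtual size and allocation of the migrated image on the target
qemu-img info /mnt/pve/ceph_migrate/images/202/vm-202-disk-1.qcow2
# compare the target image with the original source disk; reports the first differing offset
qemu-img compare /mnt/pve/ceph_migrate/images/202/vm-202-disk-1.qcow2 <path-to-source-of-scsi4>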
 
It's possible to do a cross-cluster migration (even online) with the CLI:

"qm remote-migrate <vmid> [<target-vmid>] <target-endpoint> --target-bridge <string> --target-storage <string> [OPTIONS]"


It will rename the VMID in the config, and there's no need to use a temporary CephFS.
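
A filled-in call could look like this (host, token, bridge and storage names are placeholders; the API token needs the required privileges on the target cluster):

Code:
qm remote-migrate 202 202 \
  'host=target.example.com,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<target-cert-fingerprint>' \
  --target-bridge vmbr0 --target-storage ceph-rbd --online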
 
