Proxmox VE 6.0 - Migrating VM with direct attached LUN

Dariusz Bilewicz

Sep 13, 2018
Hi,

I have a question about migrating a VM with a directly attached LUN.

This is my config:

bootdisk: scsi0
cores: 2
ide2: none,media=cdrom
memory: 16384
name: sowa
net0: virtio=FE:5F:80:5D:76:C1,bridge=vmbr0,firewall=1,tag=380
numa: 0
onboot: 1
ostype: l26
scsi0: huawei-lvm:vm-402-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=05cd5857-9531-495d-b6de-81608a090585
sockets: 2
virtio2: /dev/disk/by-id/scsi-36b44326100e946f165aa08c500000003,backup=0,size=5T
vmgenid: f307a6c2-4505-483a-be4d-f2aa624ea9fe

As you can see, I have the LUN "/dev/disk/by-id/scsi-36b44326100e946f165aa08c500000003" attached to my VM, and I'd like to move the VM to another node. The LUN, which I attached manually, is accessible from both nodes, but when I try to migrate the VM, the migration tool apparently wants to create an LVM volume on my 'Disk image' storage and copy the data from the LUN there.

Is it possible to migrate such a machine without downtime and without having to detach the directly attached block device?
 
You could try setting the shared option; see man qm:
shared=<boolean> (default = 0)
Mark this locally-managed volume as available on all nodes.

Warning
This option does not share the volume automatically, it assumes it is shared already!
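Assuming the LUN really is visible on all nodes, the flag could be applied along these lines (a sketch, using the VM ID and device path from the config above; verify against man qm before running):

```
qm set 402 --virtio2 /dev/disk/by-id/scsi-36b44326100e946f165aa08c500000003,backup=0,shared=1,size=5T
```

which would leave the config line looking like:

```
virtio2: /dev/disk/by-id/scsi-36b44326100e946f165aa08c500000003,backup=0,shared=1,size=5T
```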
 
Thanks for the answer.

Unfortunately, it only works offline; an online migration fails with an error:

2019-08-23 13:51:32 starting migration of VM 402 to node 'proxmox5' (172.19.0.6)
2019-08-23 13:51:32 found local disk 'huawei-lvm:vm-402-disk-0' (in current VM config)
2019-08-23 13:51:32 copying disk images
2019-08-23 13:51:32 starting VM 402 on remote node 'proxmox5'
2019-08-23 13:51:34 unable to parse volume ID '/dev/disk/by-id/scsi-36b44326100e946f165aa08c500000003'
2019-08-23 13:51:34 ERROR: online migrate failure - command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox5' root@172.19.0.6 qm start 402 --skiplock --migratedfrom proxmox4 --migration_type secure --stateuri unix --machine pc-i440fx-4.0 --targetstorage huawei-lvm' failed: exit code 255
2019-08-23 13:51:34 aborting phase 2 - cleanup resources
2019-08-23 13:51:34 migrate_cancel
2019-08-23 13:51:35 ERROR: migration finished with problems (duration 00:00:04)
TASK ERROR: migration problems

Still, this is better than having to disconnect the volume.
 
Hi,

Sure, I agree that LVM storage is best for online migration, but I need a 5 TB volume mounted in one specific VM. Online migration with a data copy is impossible in that case, not only because of the time such a task would take, but more importantly because I would need to attach not just one 5 TB LUN for my VM but as many as there are nodes in the cluster.

Thanks anyway; the "shared=1" solution is sufficient for this scenario.
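For reference, since shared=1 only tells Proxmox not to copy the volume (it does not make it shared, per the warning above), it is worth confirming the by-id path actually exists on every node before flipping the flag. A minimal sketch, where check_dev is a made-up helper for illustration, not a Proxmox command; run it on each node, e.g. over ssh:

```shell
#!/bin/sh
# Sketch: report whether a device path is present on this node.
# check_dev is a hypothetical helper, not part of Proxmox.
check_dev() {
    if [ -e "$1" ]; then
        echo "OK: $1"
    else
        echo "MISSING: $1"
    fi
}

# The LUN path from the VM config in this thread:
check_dev /dev/disk/by-id/scsi-36b44326100e946f165aa08c500000003
```

If any node prints MISSING, migration with shared=1 would start the VM on a node where the disk does not exist.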