LVM on an iSCSI LUN presented to a cluster, and compute-only migrations?

surfrock66

I have a question about how storage works in Proxmox, and I'm not sure whether I've set up my storage incorrectly. I'm coming from an ESXi environment with compute-only migrations and am looking to emulate that behavior in Proxmox.

I have 8 hosts, each with a fiber NIC connected to a switch with two paths to the SAN. I set them up in Proxmox and connected an iSCSI volume from my SAN to each host; it's the same volume presented to all of the hosts.
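For reference, I added each iSCSI target roughly like this; the storage ID, portal address, and IQN below are placeholders rather than my real values:

Code:
# placeholder portal/IQN; the LUN itself is not used directly for images
pvesm add iscsi san-iscsi --portal 192.0.2.10 --target iqn.2001-05.com.example:storage.lun1 --content none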

I then clustered the hosts; when I click my datacenter/cluster in Proxmox and go to the Storage tab, I see the iSCSI targets listed. I created an LVM storage on top of it and it shows up perfectly.
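The datacenter-wide definitions end up in /etc/pve/storage.cfg; mine looks roughly like the sketch below, where the VG name and the base volume ID are placeholders (only DS-PROX-Servers matches the task log further down):

Code:
iscsi: san-iscsi
        portal 192.0.2.10
        target iqn.2001-05.com.example:storage.lun1
        content none

lvm: DS-PROX-Servers
        vgname vg-prox-servers
        base san-iscsi:0.0.0.scsi-<lun-id>
        content images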

I then created a VM and attempted a migration. It worked; however, it copied the disk. This is what the migration task log shows:

Code:
2021-09-28 09:40:05 use dedicated network address for sending migration traffic (10.1.###.##5)
2021-09-28 09:40:05 starting migration of VM 100 to node 'PROX-05' (10.1.###.##5)
2021-09-28 09:40:06 found local disk 'DS-PROX-Servers:vm-100-disk-1' (in current VM config)
2021-09-28 09:40:06 starting VM 100 on remote node 'PROX-05'
2021-09-28 09:40:09 volume 'DS-PROX-Servers:vm-100-disk-1' is 'DS-PROX-Servers:vm-100-disk-0' on the target
2021-09-28 09:40:09 start remote tunnel
2021-09-28 09:40:10 ssh tunnel ver 1
2021-09-28 09:40:10 starting storage migration
2021-09-28 09:40:10 scsi0: start migration to nbd:unix:/run/qemu-server/100_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
drive-scsi0: transferred 0.0 B of 80.0 GiB (0.00%) in 0s
drive-scsi0: transferred 104.0 MiB of 80.0 GiB (0.13%) in 1s
.....
drive-scsi0: transferred 80.0 GiB of 80.0 GiB (100.00%) in 12m 23s, ready
all 'mirror' jobs are ready
2021-09-28 09:52:33 starting online/live migration on unix:/run/qemu-server/100.migrate
2021-09-28 09:52:33 set migration capabilities
2021-09-28 09:52:33 migration downtime limit: 100 ms
2021-09-28 09:52:33 migration cachesize: 512.0 MiB
2021-09-28 09:52:33 set migration parameters
2021-09-28 09:52:33 start migrate command to unix:/run/qemu-server/100.migrate
2021-09-28 09:52:34 migration active, transferred 107.3 MiB of 4.0 GiB VM-state, 114.8 MiB/s
...
2021-09-28 09:52:46 migration active, transferred 1.3 GiB of 4.0 GiB VM-state, 119.1 MiB/s
2021-09-28 09:52:46 average migration speed: 316.4 MiB/s - downtime 173 ms
2021-09-28 09:52:46 migration status: completed
all 'mirror' jobs are ready
drive-scsi0: Completing block job_id...
drive-scsi0: Completed successfully.
drive-scsi0: mirror-job finished
2021-09-28 09:52:47 stopping NBD storage migration server on target.
  Logical volume "vm-100-disk-1" successfully removed
2021-09-28 09:52:56 migration finished successfully (duration 00:12:51)
TASK OK

So it's copying the disk to the same location. Did I set this up wrong? Or is this how Proxmox migrations are supposed to work, with no equivalent concept of a compute-only migration?
 
OK, I didn't understand the shared setting; that did the trick.
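In case anyone else hits this: marking the LVM storage as shared tells Proxmox that every node sees the same volume group, so a migration only transfers RAM/state instead of mirroring the disk. Roughly, using the storage ID from the log above:

Code:
# equivalent to adding "shared 1" to the lvm entry in /etc/pve/storage.cfg
pvesm set DS-PROX-Servers --shared 1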

I'll read into it; this is our lab environment for an eval, so I'll go through that article and see what we should do differently when architecting this for a move to prod.
 
