PVE 8.2.4 times out moving disk

ak_hepcat

New Member
Sep 7, 2023
Anchorage, AK
akhepcat.com
I've got a PVE system running 8.2.4 (16 cores, 128 GB RAM) with TrueNAS managing my storage via PCI passthrough.

I've been migrating my VMs from a different system with qm remote-migrate, but that's only successful using the local-lvm storage, which is a 900 GB SSD. So I stage my migrations there, and then move the storage to my NFS-mounted RAIDZ pool.
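For reference, my staging workflow looks roughly like this. The VM IDs, hostname, token, fingerprint, disk name, and storage IDs below are all placeholders, and the exact flags may vary by PVE version:

```shell
# Step 1 (on the old node): pull the VM over to the new node's fast
# local storage. The endpoint string, API token, and fingerprint are
# placeholders for the real credentials.
qm remote-migrate 100 100 \
  'host=new-node.example.com,apitoken=PVEAPIToken=root@pam!migrate=SECRET,fingerprint=AA:BB:CC:...' \
  --target-storage local-lvm \
  --target-bridge vmbr0

# Step 2 (on the new node): move the disk off the staging SSD onto the
# NFS-backed pool. 'scsi0' and 'tank-nfs' are placeholders for the
# actual disk slot and storage ID.
qm disk move 100 scsi0 tank-nfs --format qcow2 --delete 1
```

It's step 2 that times out on the larger disks.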

For my first 3 migrations, the disk sizes were under 64 GB, and those completed successfully.

My second-largest VM has a 450 GB disk, and when I try moving it, I can see it start: I watch the new drive growing on the NFS-mounted storage pool, but the operation times out after about a minute.

It seems as if there's a hard timeout on disk creation, which is illogical: as long as the format is still making progress (a stat of the qcow2 file shows its size increasing), the command should be allowed to continue.

Alternatively, the timeout should be configurable in the settings, as well as on the "qm disk move" CLI.
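For what it's worth, the one related knob I've found is the storage-level preallocation setting, which as I understand it controls how much of a new qcow2/raw image is written out up front when the disk is created. The storage ID below is a placeholder, and I'm not certain this actually avoids the timeout in my case:

```shell
# Tell PVE not to fully preallocate new images on this storage, so the
# initial image-creation step returns quickly instead of writing out
# hundreds of GB up front. 'tank-nfs' is a placeholder storage ID.
pvesm set tank-nfs --preallocation off

# Equivalent stanza in /etc/pve/storage.cfg (for reference):
#   nfs: tank-nfs
#       ...
#       preallocation off
```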

In addition, it looks like there's no way to move a VM that has a TPM state volume attached, either.

I'm really stuck here, with that previously mentioned 450 GB VM, and a second VM with a 1.5 TB disk that will *not* fit on the 'local-lvm' staging storage.
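The only alternative I can think of for the 1.5 TB VM is skipping remote-migrate entirely and going through a backup/restore cycle, restoring straight onto the NFS storage. The VM IDs, dump directory, archive name, and storage ID below are placeholders, and I haven't verified this avoids the same timeout:

```shell
# On the old node: dump the VM to a location both nodes can reach.
vzdump 101 --mode stop --compress zstd --dumpdir /mnt/export

# On the new node: restore the archive directly onto the NFS pool,
# bypassing the local-lvm staging step entirely. The archive filename
# is a placeholder.
qmrestore /mnt/export/vzdump-qemu-101-EXAMPLE.vma.zst 101 --storage tank-nfs
```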

How can I work around this unexpected and poor behavior?
 
