I've got a PVE host running 8.2.4 (16 cores, 128 GB RAM) with TrueNAS managing my storage via PCI passthrough.
I've been migrating my VMs from a different system with qm remote-migrate, but that only succeeds when targeting the local-lvm storage, which is a 900 GB SSD. So I stage my migrations there and then move the disks to my NFS-mounted zRaid pool.
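For reference, this is roughly what I'm running (the VM ID, API token, host, fingerprint, and NFS storage name are placeholders for my actual values):

# stage the remote migration onto the local-lvm SSD
qm remote-migrate 101 101 'apitoken=PVEAPIToken=root@pam!migrate=<token-secret>,host=<new-node>,fingerprint=<fingerprint>' --target-bridge vmbr0 --target-storage local-lvm

# then relocate the disk to the NFS-backed pool
qm disk move 101 scsi0 <nfs-storage> --format qcow2 --delete 1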
My first three migrations had disks under 64 GB, and those completed successfully.
My second-largest VM has a 450 GB disk. When I try to move it, I can see the operation start and watch the new disk being created on the NFS-mounted storage pool, but it times out after about a minute.
It seems as if there's a hard timeout on disk creation, which makes little sense: as long as the format is still making progress (stat on the qcow2 file shows an increasing size), the command should be allowed to continue.
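This is how I'm watching the file grow while the task runs (the storage name, VM ID, and disk file name are placeholders; /mnt/pve is just the default mount point for NFS storage on my node):

watch -n 5 stat --format='%s bytes  %n' /mnt/pve/<nfs-storage>/images/<vmid>/vm-<vmid>-disk-0.qcow2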
Alternatively, the timeout should be configurable, both in the settings and on the "qm disk move" CLI.
In addition, it looks like there's no way to move a VM that has a TPM state volume attached.
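For context, the affected VM has a TPM state entry in its config that looks roughly like this (volume ID shown is just an example from my staging storage):

tpmstate0: local-lvm:vm-101-disk-1,size=4M,version=v2.0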
I'm really stuck here with the previously mentioned 450 GB VM, plus a second VM with a 1.5 TB disk that will *not* fit on the local-lvm staging storage.
How can I work around this unexpected and poor behavior?