P2V/V2V errors and storage "failed to stat" errors preventing migration to PVE cluster hosts

NCSyFry

Member
Dec 7, 2022
I have a Proxmox 9.1.6 3-node cluster with shared iSCSI LVM flash storage. I have one VM (VM 100) that I can successfully start, migrate, and move between hosts, so I know the storage and cluster functions are working.

On the hosts I see this in the system log:

Feb 24 16:32:57 pve2 pvedaemon[3290454]: failed to stat '/dev/ssd-lun1-lvm/vm-100-disk-2.qcow2'
Feb 24 16:32:57 pve2 pvedaemon[3290454]: failed to stat '/dev/ssd-lun1-lvm/vm-100-disk-0.qcow2'
Feb 24 16:32:57 pve2 pvedaemon[3290454]: failed to stat '/dev/ssd-lun1-lvm/vm-100-disk-1.qcow2'
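(As I understand it, pvedaemon is literally stat()ing those paths, so the message just means the file/device node isn't there. The shell equivalent, using a path copied from my log, would be:)

```shell
# Path taken from the pvedaemon log above; stat fails because no such
# file or device node exists at that path.
path='/dev/ssd-lun1-lvm/vm-100-disk-0.qcow2'
if ! stat "$path" >/dev/null 2>&1; then
    echo "failed to stat '$path'"
fi
```

What's odd is that an LVM-backed volume shouldn't have a .qcow2 path at all.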

And when I try to P2V a machine using StarWind, I get the following error on the target host:

Feb 24 16:36:37 pve1 pvedaemon[3294352]: unable to create VM 201 - lvcreate 'ssd-lun1-lvm/vm-201-disk-0' error: Run `lvcreate --help' for more information.
Feb 24 16:36:37 pve1 pvedaemon[3292326]: <root@pam> end task UPID:pve1:00324490:05CE5999:699E4415:qmcreate:201:root@pam: unable to create VM 201 - lvcreate 'ssd-lun1-lvm/vm-201-disk-0' error: Run `lvcreate --help' for more information.

And the task error on the host:

--size may not be zero.
TASK ERROR: unable to create VM 201 - lvcreate 'ssd-lun1-lvm/vm-201-disk-0' error: Run `lvcreate --help' for more information.

If I try to view the storage details, I don't see any info in the size column:

Screenshot 2026-02-24 at 4.35.11 PM.jpg

Keep in mind, I can start/run VM 100 no problem.

I also checked to make sure that the iSCSI connections were alive on all the hosts, and they were:

Target: iqn.2000-01.com.synology:xxx-uc3400-san.ssd-lun1.789ca40f3c (non-flash)
Current Portal: xxx.xxx.xxx.50:3260,5
Persistent Portal: xxx.xxx.xxx.50:3260,5
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:e2c850d5cd66
Iface IPaddress: xxx.xxx.xxx.xxx
Iface HWaddress: default
Iface Netdev: default
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

Any ideas?

Thanks!
 
I've gotten around the migration issue by using Veeam to do a Restore to PVE, which worked, but I'm still curious why I get the "failed to stat" messages and why no size is reported for the virtual disks...
 
It looks like there is a conflict between your storage type and how the disks are being addressed.

LVM volumes should not have .qcow2 extensions in their paths. Seeing /dev/.../*.qcow2 in your logs suggests that Proxmox is treating a block-backed volume as if it were file-based (directory) storage, which would also explain why no size can be calculated ("--size may not be zero").
I would recommend:
  1. Double-checking your storage definition in the Proxmox GUI to ensure ssd-lun1-lvm is defined as a block-type LVM storage.
  2. Checking if the Volume Group (VG) is properly activated on all nodes (vgchange -ay).
  3. If you are migrating from ESXi or another hypervisor, try the built-in PVE "Import Wizard" which might bypass the size calculation issues you are seeing with StarWind.
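For point 1, a block-type LVM definition in /etc/pve/storage.cfg normally looks something like this (storage and VG names taken from the logs above; shared 1 is my assumption for an iSCSI-backed VG visible to all nodes):

```
lvm: ssd-lun1-lvm
        vgname ssd-lun1-lvm
        content images
        shared 1
```

If the same volume group also shows up behind a dir-type entry, PVE can generate .qcow2 volume names for it, which could explain the paths in your logs.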
 
The import wizard doesn't work when the vSphere storage is native vSAN... at least it didn't for me.
 
If you are still struggling with the direct import wizard or encountering "failed to stat" errors, you might want to try a more traditional but highly robust method.

The most reliable approach, especially for very large VMs or when the network connection between ESXi and PVE is less than perfect, is to export your VM from VMware as an OVF/OVA package first.

Once you have the OVF files, you can upload them to your Proxmox host's storage and use the command line to import:
  1. Export the VM as OVF from VMware.
  2. Upload the files to your PVE host (e.g., using SCP or directly into a snippet/ISO storage path).
  3. Run the following command in the PVE shell: qm importovf <new_vmid> <path_to_ovf_file> <target_storage>
While the new ESXi import feature is convenient, the qm importovf method handles large disk images and network interruptions much more gracefully in many environments. It might save you a lot of troubleshooting time!
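To sketch steps 1-3 above (placeholders throughout; substitute your own VM name, host, VMID, and storage ID):

```
# 1. Export the VM as OVF from VMware (vSphere client or ovftool)
# 2. Copy the OVF and its disk files to the PVE host
scp <vm_name>.ovf <vm_name>-disk1.vmdk root@<pve_host>:/tmp/import/
# 3. Import on the PVE host, then start the new VM
qm importovf <new_vmid> /tmp/import/<vm_name>.ovf <target_storage>
qm start <new_vmid>
```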
 
Run the following command in the PVE shell: qm importovf <new_vmid> <path_to_ovf_file> <target_storage>

Before committing, I would first run:
Code:
qm importovf <new_vmid> <path_to_ovf_file> <target_storage> --dryrun
to check that the OVF manifest is correctly populated. This is advisable because different hypervisors produce different manifests.