When I import a VMDK/OVA into Proxmox and target Ceph, it always fails with the following error:
Code:
qm importdisk 104 /mnt/pve/nfs-backups/import/LoadMaster-VLM-7.2.61.0.22763.RELEASE-VMware-OVF-FREE.vmdk ceph-vms
importing disk '/mnt/pve/nfs-backups/import/LoadMaster-VLM-7.2.61.0.22763.RELEASE-VMware-OVF-FREE.vmdk' to VM 104 ...
/dev/rbd6
transferred 0.0 B of 16.0 GiB (0.00%)
qemu-img: output file is smaller than input file
Removing image: 1% complete...
Removing image: 2% complete...
[...]
Removing image: 99% complete...
Removing image: 100% complete...done.
copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O raw /mnt/pve/nfs-backups/import/LoadMaster-VLM-7.2.61.0.22763.RELEASE-VMware-OVF-FREE.vmdk zeroinit:/dev/rbd-pve/b909c21f-6165-4feb-bba4-b42a69ec85cf/ceph-vms/vm-104-disk-1' failed: exit code 1
If I change the target storage from Ceph to NFS/SMB/LVM, the import succeeds. But when I then try to move this newly imported disk over to Ceph, it fails with the exact same error above, until I resize the disk by +4GB/8GB; after that, the migration succeeds.
The disk works just fine after it's imported to NFS; the VM runs and is accessible, so it's not an issue with the OVA or its packaged VMDKs.
I don't know whether this is a bug in the import process related to Ceph's expected image sizing/block size, but while the migration to Ceph fails, I can move this newly imported disk between NFS, SMB, and LVM in any direction without failure.
I have five or so OVAs where this is happening, all packaged for VMware.
Is there no way to tell qm disk import what target size to make the imported disk? Resizing the disk fixes this issue every time, but I first have to import to NFS or another storage, which seems redundant at best.
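For reference, the resize-first workaround I've been using can be sketched as below. This assumes the failure is a sub-GiB size mismatch between the VMDK's exact virtual size and the allocated RBD image (my guess, not confirmed); the VMID, paths, and byte count are examples, not from my actual setup:

```shell
# Workaround sketch: convert the VMDK to raw, round its size up to a
# whole GiB, then import the raw image instead of the VMDK.
# Illustrative commands (example VMID/paths):
#
#   qemu-img info --output=json source.vmdk   # read "virtual-size" in bytes
#   qemu-img convert -f vmdk -O raw source.vmdk /tmp/source.raw
#   qemu-img resize -f raw /tmp/source.raw "${GIB}G"
#   qm importdisk 104 /tmp/source.raw ceph-vms
#
# Rounding the reported virtual size up to the next whole GiB:
BYTES=17179869185                           # example value from qemu-img info
GIB=$(( (BYTES + 1073741823) / 1073741824 ))  # ceiling division by 2^30
echo "${GIB}G"                              # -> 17G
```

This at least avoids the detour through NFS, though it still shouldn't be necessary if the import could size the target correctly in the first place.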