Migration plan from vSAN cluster to Proxmox

Hello
To migrate from vSAN, I first set up vSAN File Services, created an NFS share, and mounted it on ESXi; then I shut down the VM and storage-moved it to the NFS share. On the Proxmox side I can see the NFS share as well. I tried to use qm import directly from the NFS share:
unable to parse volume ID './Clone_mavm.vmx'
Using qm import with ESXI_FQDN:ha-datacenter/DS_migration_Proxmox/Clone_mavm/Clone_mavm.vmx works and copies all the files.

Is there any way to use qm import with an NFS share directly?

If I use the GUI I can import a VM without its disks, and in a second step run qm disk import from the NFS share (much faster than the GUI import). But since I need to bulk-import hundreds of VMs, I'm trying to find a CLI command to import only the VM skeleton (without the disks) and then use qm disk import from the NFS source.
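Something along these lines is what I'm after (the VM settings are placeholders; the disk path follows the NFS layout from my setup):
Bash:
id=`pvesh get /cluster/nextid`
# create the empty VM skeleton
qm create $id --name BOUM --memory 4096 --cores 2 --scsihw virtio-scsi-single --net0 virtio,bridge=vmbr0
# import the disk from the NFS mount into Ceph, then attach it
qm disk import $id /mnt/pve/DS_migration_row_Proxmox/Clone_XXX/Clone_XXX.vmdk R3_Ceph_Datastore
qm set $id --scsi0 R3_Ceph_Datastore:vm-$id-disk-0 --boot order=scsi0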

Regards
 
Hi @Xavier Droniou, can you provide the exact commands you executed, with their output? As well as your storage configuration:
cat /etc/pve/storage.cfg

To answer your question: yes, you can import a file from a locally mounted path; generally you will need to provide the absolute path to the file you are importing.
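For example, qm disk import will accept an absolute path to a disk image on a locally mounted share (VM ID, path, and target storage below are hypothetical):
Code:
qm disk import 100 /mnt/pve/my-nfs/my-vm/my-vm.vmdk my-target-storage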

Cheers


Bash:
cat /etc/pve/storage.cfg
dir: local
        disable
        path /var/lib/vz
        content snippets
        shared 0

rbd: R3_Ceph_Datastore
        content images,rootdir
        krbd 0
        pool R3_Ceph_Datastore

cephfs: R3_Cephfs
        path /mnt/pve/R3_Cephfs
        content vztmpl,import,iso
        fs-name R3_Cephfs

pbs: pool_zfs_01
        datastore pool_zfs_01
        server 1.2.7.1
        content backup
        fingerprint e1:d7:71:f3:bc:49:0b:c1:83:5d:80:c3:11:31:e3:82:cf:5c:81:2d:a6:39:f0:10:95:6a:e1:c8:17:40:c5:f7
        prune-backups keep-all=1
        username XXX@XXX

esxi: XXX.XXX.local
        server XXX.XXX.local
        username root
        content import
        skip-cert-verification 1

esxi: YYY.XXX.local
        server YYY.XXX.local
        username root
        content import
        skip-cert-verification 1

nfs: DS_migration_row_Proxmox
        export /nfsmigrationproxmox
        path /mnt/pve/DS_migration_row_Proxmox
        server 1.2.2.4
        content images,import
        prune-backups keep-all=1

The command I tried is this one:
Bash:
id=`pvesh get /cluster/nextid`
qm import $id /mnt/pve/DS_migration_row_Proxmox/Clone_XXX/Clone_XXX.vmx --storage R3_Ceph_Datastore --name BOUM
unable to parse volume ID '/mnt/pve/DS_migration_row_Proxmox/Clone_XXX/Clone_XXX.vmx'

Bash:
pvesm list DS_migration_row_Proxmox
Volid Format  Type      Size VMID

As I'm trying to use NFS (for a faster copy) and pvesm shows no volume IDs there, it seems that qm import needs one.
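For comparison, listing the ESXi-type storage (name from my storage.cfg above) should show the import volume IDs that qm import does accept:
Bash:
pvesm list XXX.XXX.local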

I will try qm create with --import-from for each SCSI disk to see if it works better.
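A rough sketch of that variant (paths and VM settings are placeholders; import-from accepts an absolute path when run as root):
Bash:
id=`pvesh get /cluster/nextid`
# allocate the disk on Ceph and import straight from the NFS-mounted vmdk
qm create $id --name BOUM --memory 4096 --cores 2 \
    --scsi0 R3_Ceph_Datastore:0,import-from=/mnt/pve/DS_migration_row_Proxmox/Clone_XXX/Clone_XXX.vmdk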
 
Looking at the man page of qm:
Code:
qm import <vmid> <source> --storage <string> [OPTIONS]

       Import a foreign virtual guest from a supported import source, such as an ESXi storage.

       <vmid>: <integer> (100 - 999999999)
           The (unique) ID of the VM.

       <source>: <string>
           The import source volume id.

The <source> argument must be a volume ID in the format <storage>:<path>.
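For example, with the esxi storage from the storage.cfg above, the working form from earlier in this thread is (VM ID is a placeholder):
Code:
qm import 105 XXX.XXX.local:ha-datacenter/DS_migration_Proxmox/Clone_mavm/Clone_mavm.vmx --storage R3_Ceph_Datastore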



Hey bbgeek17,
as I said in my first message, using qm import with ESXI_FQDN:ha-datacenter/DS_migration_Proxmox/Clone_mavm/Clone_mavm.vmx works and copies all the files, but it's slow.
I found a way to import only the VM skeleton from the CLI (using --scsiXX file=none) and then import the disks (qm disk import) from the NFS storage, which seems faster. But over NFS I'm stuck at 40 MB/s (v3 or v4.1, multiple mount options tested, no change).
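Concretely, the sequence I ended up with looks roughly like this (VM ID and paths are placeholders; repeat the file=none override for each disk):
Bash:
# step 1: import only the VM skeleton from the ESXi source, skipping the disk
id=`pvesh get /cluster/nextid`
qm import $id XXX.XXX.local:ha-datacenter/DS_migration_Proxmox/Clone_mavm/Clone_mavm.vmx \
    --storage R3_Ceph_Datastore --name BOUM --scsi0 file=none
# step 2: import the disk from the NFS mount and attach it
qm disk import $id /mnt/pve/DS_migration_row_Proxmox/Clone_mavm/Clone_mavm.vmdk R3_Ceph_Datastore
qm set $id --scsi0 R3_Ceph_Datastore:vm-$id-disk-0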
I checked and saw that qemu-img convert runs single-threaded. After editing /usr/share/perl5/PVE/QemuServer/QemuImage.pm I added -m 8 -C to the options passed to qemu-img, but it made no difference in conversion time.
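To check whether the NFS read side or the conversion itself is the limit, the conversion can be run by hand against a scratch target (source and target paths are placeholders; -W allows out-of-order writes and may be worth testing alongside -m):
Bash:
# -p progress, -m coroutine count, -W out-of-order writes
qemu-img convert -p -m 8 -W -f vmdk -O raw \
    /mnt/pve/DS_migration_row_Proxmox/Clone_mavm/Clone_mavm.vmdk /tmp/scratch.raw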

So for now I'm stuck here.

I have 30 TB of storage and hundreds of VMs to move, so I'm trying to find the fastest solution that doesn't rely on the NFS share for running VMs (NFS from vSAN File Services is not supported for running VMs on it). And as I said, the storage is vSAN-backed, so there is no direct path for a smoother migration.

It was a good try so far; I'm just not sure where the bottleneck is when importing a VMDK to Ceph-backed storage here.
 