Hey,
I'm considering migrating from my current ESXi setup, with FreeNAS providing ZFS and the VM datastore, to Proxmox. I haven't had any issues with this setup, but I'd like to give Proxmox and LXC a try.
In the current setup I'm using thin provisioning for the VMs (vmdk). The maximum allowed VM disk size is large - a few machines are over 100 GB, and one is 1 TB - but the actual space used is less than 600 GB in total across all VMs.
I have another machine with Proxmox and ZFS set up, so I'm trying to convert the vmdk files into Proxmox KVM machines. The conversion itself works, but I could not make qemu-img or ZFS ignore the "unallocated space".
When using:
qemu-img convert -f vmdk -O raw <vmdk_file_location> /dev/zvol/VM/VM-disk
the conversion requires the target to be the full 100 GB+ and actually allocates that space, even though the source is thin provisioned and the space actually used inside the VM is much less.
I tried with -S 4k (and also 1k), but nothing changed; the conversion still consumes the whole space.
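For what it's worth, here is the kind of approach I was expecting to work - a sketch only, with placeholder pool/zvol names rather than my real ones: create the target zvol sparse and with compression enabled, so that the runs of zeroes qemu-img writes for the unallocated space should not actually consume pool space.

```shell
# Sketch only -- pool/zvol names are placeholders, adjust to your setup.
# Create a sparse (-s) zvol; with lz4 compression on, zero-filled blocks
# written during the conversion compress away to (almost) nothing on disk.
zfs create -s -V 100G -o compression=lz4 tank/vm-100-disk-1

# Convert straight onto the zvol's block device; -S 4k asks qemu-img to
# treat 4 KiB runs of zeroes as unallocated where the target allows it.
qemu-img convert -p -S 4k -f vmdk -O raw source.vmdk /dev/zvol/tank/vm-100-disk-1
```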
If I convert to qcow2 instead, the output file has the correct size (the same as the space actually allocated by the ESXi VM, ignoring the remaining free space inside the VM), but when I assign the file to a VM it does not boot - the disk is not even recognized during the boot process. It's simply not in the list, and the boot options shown are 1) CD, 2) Network Boot, 3) Legacy ROM.
I also tried:
qemu-img convert -f qcow2 -O raw /location_to_qcow2 /dev/zvol/storage/vm-XXX-disk1
but that did not work either; the machine does not boot.
I made a zvol myself with sparse enabled, and modified already existing zvols with zfs set refreservation=none tank .. no luck.
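For reference, this is how I've been checking whether a zvol is actually thin after the conversion (dataset name below is a placeholder, not my real one):

```shell
# Placeholder dataset name -- substitute your actual pool/zvol.
# If thin provisioning is working, 'used' should be close to the data
# actually written, not the full volsize, and refreservation should be none.
zfs get volsize,used,referenced,refreservation,compressratio tank/vm-100-disk-1
```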
Apparently I'm missing a vital step in the process, and I'd appreciate it if someone could offer some ideas.
My Proxmox machine has 80 GB of DDR3 ECC RAM and a 3 TB ZFS mirror with a ZIL SSD.
The ultimate goal is to migrate the current thin-provisioned VMs hosted on ESXi to Proxmox ZFS.
Approximate disk size given to the ESXi VMs: 4.5 TB
Actual space used by all VMs: 0.6 TB
Thank you for your cooperation.