Full Disk Copy Issue during Live Migration

footed

New Member
Mar 22, 2024
Hi all,

I have previous experience with Hyper-V, and I'm now trying out Proxmox. I attempted a live migration from one server to another and noticed that the disk is fully copied.
In Hyper-V, only the used blocks in the virtual disk are copied, whereas in Proxmox, if the virtual disk is 128GB, it will copy the entire 128GB.
I've tried using both local ZFS and directory storage with qcow2 format, and the result was the same.
I'm aware of a technique called Thin Provisioning. Does Proxmox support it?
 
local ZFS
AFAIK local ZFS should support thin provisioning. (Just noticed it's also indicated in the official docs UdoB attached.) It probably depends on the exact pool you have set up and how it's configured. I personally don't use ZFS.
I understand you performed a live migration within a Proxmox cluster. How much disk space was actually being used on the original node?
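If it helps, this is roughly what I would look at, assuming a fairly default local-zfs setup; I don't run ZFS myself, so treat the storage name, pool, and dataset below as illustrative rather than verified:

# /etc/pve/storage.cfg -- a zfspool entry with thin provisioning enabled
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

# A thin zvol has no refreservation, so "used" tracks the data actually
# written instead of the full volsize:
zfs get volsize,used,refreservation rpool/data/vm-100-disk-0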
 
AFAIK local ZFS should support thin provisioning. (Just noticed it's also indicated in the official docs UdoB attached.) It probably depends on the exact pool you have set up and how it's configured. I personally don't use ZFS.
Thank you for your input. Indeed, Proxmox supports various storage types, and I may not have fully understood the differences between them when selecting the storage type for my setup.
I understand you performed a live migration within a Proxmox cluster. How much disk space was actually being used on the original node?
As for my specific situation, I'm currently experimenting with Proxmox by setting up a Windows 11 VM for testing purposes. During the live migration process, the virtual disk utilized approximately 17GB of space out of the total 128GB available.

Given this situation, I'm particularly interested in understanding why the live migration process in Proxmox is copying the entire virtual disk rather than just the used space. Any insights or suggestions regarding this specific issue would be greatly appreciated.
 
During the live migration process, the virtual disk utilized approximately 17GB of space out of the total 128GB available.
Does this mean you have a virtual disk attached to the VM whose size is 128GB, and you think it's only physically using 17GB?
Or does it mean you have a 17GB virtual disk carved out of a 128GB pool? I've re-read your original post and it does seem you meant the first interpretation.
Given this situation, I'm particularly interested in understanding why the live migration process in Proxmox is copying the entire virtual disk rather than just the used space
Do you mean that a thin disk has become thick, or something else? How did you determine that the "entire" disk was copied?
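One way to take the guesswork out of it is to compare the provisioned size with the actual allocation on the source node before migrating, roughly like this (the paths and dataset names are placeholders for your setup):

# qcow2 on directory storage: compare "virtual size" with "disk size"
qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2

# zvol on local ZFS: compare volsize (provisioned) with used (allocated)
zfs list -o name,volsize,used,refer rpool/data/vm-100-disk-0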



 
This is a related discussion:
https://forum.proxmox.com/threads/v...igrating-it-to-lvm-thin-storage.142070/page-2


I have tried that, but I still run into the same issue: the entire 128 GiB virtual disk is sent to the other server during migration.

Here are the details of my virtual disk:

image: vm-100-disk-4.qcow2
file format: qcow2
virtual size: 128 GiB (137438953472 bytes)
disk size: 16.2 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: vm-100-disk-4.qcow2
    protocol type: file
    file length: 128 GiB (137460187136 bytes)
    disk size: 16.2 GiB

Despite the disk size being only 16.2 GiB, the migration process consistently appears to send the entire 128 GiB of data to the other server.

I am keen to understand if there are any settings or configurations that can be adjusted to ensure that only the actual disk size (16.2 GiB) is sent during the migration process. Any guidance on resolving this issue would be greatly appreciated.
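For completeness, the allocation of the qcow2 image can also be inspected directly; something like this should show which guest ranges actually contain data (just a sketch, using the same file as in the output above):

# List allocated ranges; entries with "data": true are what really needs
# to be copied.
qemu-img map --output=json vm-100-disk-4.qcow2

# Estimate the size a qcow2 copy of this image would require
qemu-img measure -O qcow2 vm-100-disk-4.qcow2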
 
I just went for dinner, and when I came back and looked at the migration details I found this:

drive-scsi0: transferred 28.8 GiB of 128.0 GiB (22.54%) in 3m 54s
drive-scsi0: transferred 29.0 GiB of 128.0 GiB (22.64%) in 3m 55s
drive-scsi0: transferred 29.1 GiB of 128.0 GiB (22.72%) in 3m 56s
drive-scsi0: transferred 71.5 GiB of 128.0 GiB (55.83%) in 3m 57s
drive-scsi0: transferred 127.2 GiB of 128.0 GiB (99.39%) in 3m 58s
drive-scsi0: transferred 127.3 GiB of 128.0 GiB (99.48%) in 3m 59s
drive-scsi0: transferred 127.4 GiB of 128.0 GiB (99.56%) in 4m
drive-scsi0: transferred 127.6 GiB of 128.0 GiB (99.72%) in 4m 1s
drive-scsi0: transferred 127.7 GiB of 128.0 GiB (99.80%) in 4m 2s
drive-scsi0: transferred 127.9 GiB of 128.0 GiB (99.88%) in 4m 3s
drive-scsi0: transferred 128.0 GiB of 128.0 GiB (99.97%) in 4m 4s
drive-scsi0: transferred 128.0 GiB of 128.0 GiB (100.00%) in 4m 5s, ready

I've reviewed the details and this addresses my concern. It appears that the migration process only transfers the used space, not the entire virtual disk (note how the progress jumps from about 29 GiB to 127 GiB in a couple of seconds as the unallocated ranges are skipped). The misunderstanding arose from the displayed total of 128 GiB, which represents the virtual size of the disk, not the amount of data actually copied.
 
I found out why I misunderstood the 128 GiB transfer.

With the qcow2 format, the migration sends only the actually used data, not the total size of the virtual disk. But when I moved the disk to local-zfs with the raw format, the migration transferred the full 128 GiB.
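If anyone else runs into this: as far as I can tell, raw disks on a zfspool storage are only thin-provisioned if the storage has the sparse flag set, and that only applies to disks created or moved there afterwards. Something along these lines should do it (the dataset name is illustrative, and please double-check against the docs, as I haven't verified every detail):

# Enable thin provisioning on the ZFS storage
# (only affects disks created or moved there after the change)
pvesm set local-zfs --sparse 1

# Verify: a thin zvol should show refreservation = none
zfs get refreservation rpool/data/vm-100-disk-4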
 
