[SOLVED] Converted 40GB (real) VHDX to RAW (256GB full) onto 150GB (real) Local-LVM

Tmanok

Hi Everyone,

I've been working around the clock, and I did something potentially stupid that I need help with first. I'm migrating away from a Hyper-V environment to Proxmox; my VHDX file was "256GB" but thin-provisioned, so it only occupied 40GB on disk. I created a VM with an OVMF BIOS and a 256GB disk on my local-LVM, which I forgot is only 150GB of real capacity... Then I ran qemu-img convert from my 40GB VHDX to RAW on /dev/mapper/pve-vm--108-disk-0, overwriting the original disk created by Proxmox.
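
For anyone reading later, the conversion was roughly of this form (the source path here is a placeholder):
Code:
# write the VHDX contents out as raw data directly onto the VM's logical volume
qemu-img convert -O raw /path/to/vm.vhdx /dev/mapper/pve-vm--108-disk-0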

Immediately upon realizing my stupid mistake, I used the GUI to move the VM's disk to my Ceph storage (1.5TB) in a panic. It is still moving, but my question is: did I just break a bunch of stuff? How should I fix this if I did? I was not sure how to run qemu-img convert from a file (VM.vhdx) to a Ceph block storage, which is why I sent it to local-lvm.

Please let me know your thoughts, they will be appreciated.

Tmanok
 
did I just break a bunch of stuff?
I think only if your local file system is now completely filled. What does df -h show?

How should I fix this if I did?
You can use the following commands to inspect the LVM storages
Code:
pvs
pvdisplay
vgs
vgdisplay
lvs
lvdisplay
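
For a thin pool like the default local-lvm, the Data% column is the interesting one; a quick check (a sketch, assuming the default pve volume group with its data thin pool) would be:
Code:
# Data% shows how full each thin volume and the thin pool actually are
lvs -o lv_name,lv_size,data_percent pve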

If you are unsure how to continue, then please also post your VM configuration
Code:
qm config 108

I was not sure how to run qemu-img convert from a file
Usually it should not be necessary to use qemu-img yourself. Proxmox VE provides the more high-level commands
Code:
qm importdisk
qm importovf
for imports, and if you click on a VM -> Hardware, you can move disks between storages and resize them in the GUI.
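
For example, pulling a VHDX into VM 108 on a chosen target storage could look roughly like this (the source path is a placeholder):
Code:
# imports the VHDX as a new, initially unused disk on the given storage
qm importdisk 108 /path/to/vm.vhdx local-lvm
The imported disk then shows up as an "unused disk" in the VM's hardware list, where it can be attached to a bus and resized.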
 
Hi Dominic,

Sorry for the delay! The VM appears to be working just fine, and the node whose local storage I overfilled has been running without issue ever since!

df -h on the node:
Bash:
Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G   34M  3.1G   2% /run
/dev/mapper/pve-root   57G   12G   43G  22% /
tmpfs                  16G   63M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  16G     0   16G   0% /sys/fs/cgroup
tmpfs                  16G   28K   16G   1% /var/lib/ceph/osd/ceph-0
tmpfs                  16G   28K   16G   1% /var/lib/ceph/osd/ceph-2
tmpfs                  16G   28K   16G   1% /var/lib/ceph/osd/ceph-1
tmpfs                  16G   28K   16G   1% /var/lib/ceph/osd/ceph-3
/dev/fuse              30M   36K   30M   1% /etc/pve
tmpfs                 3.2G     0  3.2G   0% /run/user/0

None of the LVM commands pointed to anything out of the ordinary, from what I could tell.

Can "qm importdisk" convert a VHDX?? That is big news to me! I may end up trying that later this week after hours! I've certainly taken advantage of the gui tools, but only after converting from a VHDX file to a RAW file overtop of an existing (and properly sized) disk created in the GUI, then I told the GUI to move the file from a filesystem (Spare SSD most of the time) to my CEPH block storage. I didn't know how to use qemu-img convert to go from a file "/mnt/Crucial1TBSSD/vm.vhdx " to my CEPH storage....

Also, a note for anyone else doing this: it is faster to move the VHDX file to an EXT4 file system and then run qemu-img convert than to convert straight from an NTFS file system, because the NTFS driver is bound to a single CPU core, which is VERY slow...


Thanks Dominic!


Tmanok
 
has been running without issue ever since!
Great!
Can "qm importdisk" convert a VHDX??
qm importdisk uses qemu-img convert. While some sources online didn't state vhdx as supported when I looked this up, it has worked during my tests.
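
If you want to check what qemu-img on your node makes of a particular file, qemu-img info prints the detected format along with the virtual and on-disk sizes (the path is a placeholder):
Code:
# reports "file format: vhdx" plus virtual size and disk size
qemu-img info /path/to/vm.vhdx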
 
Excellent, thank you so much! After reading the manual I see the elegance in this simplified, high-level command! I didn't realize that qm importdisk would convert a disk and output the newly converted disk to the storage of my choice; I was worried that it would simply import the disk and keep it next to the original VHDX file haha... Also, not having to worry about virtual disk size is very nice.

Wonderful tool! I'm testing it next week with three VHDX files tied to a single VM; I'll post a follow-up soon. I would like to try qcow2 files, but I'm a little nervous. Do you have any thoughts on going from VHDX to qcow2 rather than RAW? Originally I went from VHDX to RAW with an IDE storage controller so that I could install the VirtIO tools, but I'm not sure if I should try VHDX to qcow2 on this next VM. Additionally, should I go from RAW to qcow2 on my already-migrated Windows VMs?

Thanks so much, Dominic. When the company I work for is able to allocate me a new budget next year, Proxmox support will be very high on my list; this year the budget went towards these new hypervisors so I can move on from what the last IT team left us with (Hyper-V of all things!).


Tmanok
 
If you want snapshots then you will need qcow2. Conversion between raw and qcow2 should not be a problem. This is also an option in the GUI at Hardware->Move disk
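
Purely as an illustration of the conversion itself, for a file-based disk it boils down to a qemu-img convert (filenames below are placeholders):
Code:
# -p shows progress; the source is explicitly treated as raw, the output written as qcow2
qemu-img convert -p -f raw -O qcow2 vm-108-disk-0.raw vm-108-disk-0.qcow2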
 
Hey Dominic!

I've successfully snapshotted two Windows VMs with RAW disks, RAM included, and rolled back without issue (actually, I rolled back because of an issue with the EFI disk, which I resolved tonight). Why do they "need" to be qcow2?? I will definitely convert to qcow2 if the GUI "Move disk" function can do that; that's easy! Thank you for that insight.
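
(For reference, the CLI equivalents of what I did in the GUI should be qm snapshot and qm rollback; the VM ID and snapshot name below are just examples:)
Code:
# take a snapshot that includes the RAM state, then roll back to it
qm snapshot 108 before-changes --vmstate 1
qm rollback 108 before-changes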

Regarding EFI disks, I also learned a few things. Because I was not sure whether "removing" an EFI disk would be the same as "detaching" it like other disks, I performed a full backup first; I was certain that my EFI disk was experiencing an issue, and I also worried that "removing" it would damage my snapshots. To my relief, not only does "remove" just "detach" the EFI disk, but deleting the disk once it becomes "unused" triggers a helpful error if it is part of a previous snapshot. Proxmox is much more mature in many aspects than I had previously thought, and for that I am very, very grateful.

I'm moving my final VM from Hyper-V to Proxmox tonight, three small VHDXs, and I plan on using qm importdisk this time, roughly as sketched below. First I will copy the files from the NTFS drive I performed the migration with to an EXT4 drive. Not sure if I mentioned this, but the NTFS FUSE/kernel module is single-threaded on Linux, so when I originally performed a qemu-img convert task it was bound to 1 of 40 CPU threads and took eons to complete any conversion. On EXT4, the process is very fast.
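
The rough plan looks like this (mount points, the VM ID, and the storage name below are placeholders):
Code:
# copy off the slow, single-threaded NTFS mount onto EXT4 first
cp /mnt/ntfs-usb/*.vhdx /mnt/ext4-ssd/
# then import each VHDX as a disk of the target VM on the Ceph-backed storage
for f in /mnt/ext4-ssd/*.vhdx; do
    qm importdisk 112 "$f" mx500-array
done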

Thanks!


Triston
 
Hey Dominic,

I noticed that while I was running qm importdisk, this was the actual underlying command being run:

Code:
/usr/bin/qemu-img convert -p -n -O raw /media/Crucial1TBc/IDM-APP01.vhdx zeroinit:rbd:mx500-array/vm-112-disk-2:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/...

While my high-level command looked like this: qm importdisk 112 /media/Crucial1TBc/IDM-APP01.vhdx mx500-array --format qcow2

The discrepancy I'm noticing is the format: the underlying qemu-img convert command appears to be using "raw" instead of "qcow2". I'm not sure how to check the end product of my command, because I don't know what to point qemu-img info at for a disk that lives on Ceph...

Thanks Dominic, enjoy your weekend!

Tmanok
 
Hi!

If you use Directory storages then you need qcow2 because such storages in PVE do not assume much about what the underlying directory is really capable of. In other words, the storage itself does not support snapshots. However, qcow2 supports snapshots internally. So that's how you can use snapshots on directory storages.
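
You can see this "internal" nature directly with qemu-img: a qcow2 file carries its snapshots inside the image itself (the path below is a placeholder):
Code:
# lists the snapshots stored inside the qcow2 file
qemu-img snapshot -l /path/to/vm-108-disk-0.qcow2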

In contrast, Ceph RBD storages support snapshots themselves. Therefore it is completely sufficient for PVE to support only raw images (not qcow2, not vhd(x)) on Ceph. If you try to use qcow2 on Ceph, then importdisk (silently) fixes that for you.
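
If you want to double-check what actually landed on Ceph, you don't need a directory path at all; for example (using the pool and disk names from your command above):
Code:
# how PVE references the imported disk
qm config 112
# how Ceph sees the underlying RBD image (size, format, features)
rbd info mx500-array/vm-112-disk-2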

If you haven't seen it yet, there is a Wiki page about Storages https://pve.proxmox.com/wiki/Storage
with links at the bottom for details about the various storage types.
 
Hey Dominic, I missed the notification for your comment. My apologies for the delay.

Re: Only directory needs qcow2 as virtual disk type... Very cool! And it makes perfect sense! But when you say "Directory", does that include non-block storage such as SMB/CIFS and NFS or only local and mounted media (Datacentre>Storage>Add>Directory)?

Very cool that importdisk is smart like that; it makes administration just that much easier. Thanks Dominic!

tmanok
 
