I want to convert a vmdk, but there is not enough disk space

H.c.K

Active Member
Oct 16, 2019
Hello, I have a VMware server and I am migrating to PVE. One of the virtual machines has a 110 GB disk. I want to move its vmdk to PVE and convert it to raw, but I don't have enough free space. Actually the space exists, I just don't know how to make use of it.

root@pve1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 48G 0 48G 0% /dev
tmpfs 9.5G 994M 8.5G 11% /run
/dev/mapper/pve-root 94G 13G 77G 15% /
tmpfs 48G 37M 48G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 48G 0 48G 0% /sys/fs/cgroup
/dev/fuse 30M 16K 30M 1% /etc/pve
tmpfs 9.5G 0 9.5G 0% /run/user/0
root@pve1:~# fdisk -l
Disk /dev/sda: 931.5 GiB, 1000148590592 bytes, 1953415216 sectors
Disk model: LOGICAL VOLUME
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 346CF8BF-8485-4527-B1CD-0F478B191C22

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 1953415182 1952364559 931G Linux LVM


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-vm--100--disk--0: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x3d4b1579

Device Boot Start End Sectors Size Id Type
/dev/mapper/pve-vm--100--disk--0-part1 * 2048 1026047 1024000 500M 7 HPFS/NTFS/exFAT
/dev/mapper/pve-vm--100--disk--0-part2 1026048 419428351 418402304 199.5G 7 HPFS/NTFS/exFAT


Disk /dev/mapper/pve-vm--101--disk--0: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x59322cd2

Device Boot Start End Sectors Size Id Type
/dev/mapper/pve-vm--101--disk--0-part1 * 2048 2099199 2097152 1G 83 Linux
/dev/mapper/pve-vm--101--disk--0-part2 2099200 104857599 102758400 49G 8e Linux LVM


Disk /dev/mapper/pve-vm--102--disk--0: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x000e2b7e

Device Boot Start End Sectors Size Id Type
/dev/mapper/pve-vm--102--disk--0-part1 * 2048 2099199 2097152 1G 83 Linux
/dev/mapper/pve-vm--102--disk--0-part2 2099200 209715199 207616000 99G 8e Linux LVM


Disk /dev/mapper/pve-vm--103--disk--0: 110 GiB, 118111600640 bytes, 230686720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes

I want to move the vmdk onto /dev/mapper/pve-root, but there is not enough disk space there. How can I do the conversion?

After converting, I plan to write the raw image onto the disk with "dd if=<yourVMname>.raw | pv -s 110G | dd of=/dev/mapper/pve-vm--103--disk--0".
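Roughly, this is the workflow I have in mind, just a sketch: the file name is the same placeholder as above, and the paths assume I could somehow fit the files on the root filesystem.

# Convert the copied vmdk to a raw image (this is the step where I run out of space)
qemu-img convert -p -O raw /root/<yourVMname>.vmdk /root/<yourVMname>.raw

# Write the raw image onto the existing 110 GB logical volume
dd if=/root/<yourVMname>.raw bs=1M | pv -s 110G | dd of=/dev/mapper/pve-vm--103--disk--0 bs=1M

I think qemu-img can probably also write the raw output straight to /dev/mapper/pve-vm--103--disk--0, which would skip the intermediate .raw file, but the vmdk itself still has to fit somewhere first.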

I would appreciate it if you could help me with the exact commands.

Edit:
I am still waiting for your help on this, but in the meantime I have been looking for alternative routes. I temporarily mounted an NFS share.
I will transfer the vmdk file there, convert it, and then write it to the logical volume.
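Roughly what I plan to do, as a sketch only: the NFS server address and export path below are placeholders, not my real ones.

# Mount the temporary NFS export on the PVE host
mkdir -p /mnt/nfs-tmp
mount -t nfs 192.168.1.50:/export/tmp /mnt/nfs-tmp

# Copy the vmdk there, convert it, then write the result onto the 110 GB logical volume
qemu-img convert -p -O raw /mnt/nfs-tmp/<yourVMname>.vmdk /mnt/nfs-tmp/<yourVMname>.raw
dd if=/mnt/nfs-tmp/<yourVMname>.raw bs=1M | pv -s 110G | dd of=/dev/mapper/pve-vm--103--disk--0 bs=1M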

If I could not mount an NFS share, how else could I solve this? I want to learn. A second alternative would be to convert the vmdk to a .raw file on a Windows server that has no disk space problem, using the qemu-img build from https://cloudbase.it/qemu-img-windows/. But since I still don't have 110 GB of free space on the PVE host for the result, that wouldn't solve my problem either.
 
Just FYI, you don't have 110 GB of _disk_ free anywhere in these listings. All of the tmpfs volumes are actually RAM, so filling them completely would not be good for the rest of your system. What you do seem to have is an LVM pool. You didn't give the output of "lvs", so we can't tell how big it is or how much is free, but it must exist because your root and VMs are on /dev/mapper.
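For example (assuming the volume group is called "pve", which your /dev/mapper names suggest):

vgs        # volume groups and their free space (VFree)
lvs pve    # logical volumes; for a thin pool, look at LSize and Data% of the "data" LV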

If your pool has 110 GB available plus space for the input file, you could create a temporary volume and use that to hold the input. You'd need to create the volume, make a filesystem on it, mount it somewhere, and then copy the data over.

If it is a normal Proxmox setup you probably have a thin pool called "data" where your virtual disks are located. That's where you'd get the space to create the temporary volume too. The commands you'd need would be lvs, lvcreate, mkfs, and mount.
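A rough sketch of what that could look like, assuming the usual "pve" volume group and "data" thin pool; the volume name and size below are made up, so adjust them to what lvs actually shows you:

# Create a temporary thin volume in the "data" pool, big enough for the vmdk
# (and for the intermediate raw file, if you convert on this volume too)
lvcreate -V 130G --thin -n tmp-import pve/data

# Put a filesystem on it and mount it somewhere
mkfs.ext4 /dev/pve/tmp-import
mkdir -p /mnt/tmp-import
mount /dev/pve/tmp-import /mnt/tmp-import

# ...copy the vmdk there, convert it, write the result to the target disk...

# Clean up when done
umount /mnt/tmp-import
lvremove pve/tmp-import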

Google is your friend from there :)

ETA: Be sure your LVM pool really does have enough space before you do this! If you fill it completely, bad things will happen.
 
