Convert VM on ZFS to VMware

Fafa24

Nov 9, 2023
Hi,

This is my first post on this board. Please be nice to me. :)

I have a single Proxmox server in my homelab with ZFS storage. As a test, I want to convert one VM's disk to VMware.

With ZFS I'm not sure which VM disk device I need to convert.

find / | grep vm-101-disk-0 outputs:

root@proxmox1:/dev/zvol/datastore# find / | grep vm-101-disk-0
/dev/zvol/datastore/vm-101-disk-0-part3
/dev/zvol/datastore/vm-101-disk-0-part1
/dev/zvol/datastore/vm-101-disk-0-part2
/dev/zvol/datastore/vm-101-disk-0
/dev/datastore/vm-101-disk-0-part3
/dev/datastore/vm-101-disk-0-part1
/dev/datastore/vm-101-disk-0-part2
/dev/datastore/vm-101-disk-0
/run/udev/links/zvol\x2fdatastore\x2fvm-101-disk-0-part1
/run/udev/links/zvol\x2fdatastore\x2fvm-101-disk-0-part1/b230:1
/run/udev/links/datastore\x2fvm-101-disk-0-part1
/run/udev/links/datastore\x2fvm-101-disk-0-part1/b230:1
/run/udev/links/datastore\x2fvm-101-disk-0-part2
/run/udev/links/datastore\x2fvm-101-disk-0-part2/b230:2
/run/udev/links/zvol\x2fdatastore\x2fvm-101-disk-0-part2
/run/udev/links/zvol\x2fdatastore\x2fvm-101-disk-0-part2/b230:2
/run/udev/links/zvol\x2fdatastore\x2fvm-101-disk-0-part3
/run/udev/links/zvol\x2fdatastore\x2fvm-101-disk-0-part3/b230:3
/run/udev/links/datastore\x2fvm-101-disk-0-part3
/run/udev/links/datastore\x2fvm-101-disk-0-part3/b230:3
/run/udev/links/zvol\x2fdatastore\x2fvm-101-disk-0
/run/udev/links/zvol\x2fdatastore\x2fvm-101-disk-0/b230:0
/run/udev/links/datastore\x2fvm-101-disk-0
/run/udev/links/datastore\x2fvm-101-disk-0/b230:0
root@proxmox1:/dev/zvol/datastore#
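An aside on the /run/udev/links entries above: udev escapes the "/" in a dataset name as \x2f. A small sketch (the helper name decode_udev is made up here) undoes that substitution to recover the familiar path:

```shell
# The names under /run/udev/links escape "/" as \x2f; undoing that
# substitution recovers the zvol path. decode_udev is a made-up helper.
decode_udev() { printf '%s\n' "$1" | sed 's,\\x2f,/,g'; }

decode_udev 'zvol\x2fdatastore\x2fvm-101-disk-0'
# → zvol/datastore/vm-101-disk-0
```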

Would I need to convert the whole-disk device, /dev/zvol/datastore/vm-101-disk-0? I understand that in ZFS it is not a disk file.

qemu-img convert -f raw /dev/zvol/datastore/vm-101-disk-0 -O vmdk zappix-neu.vmdk

If everything is correct, will the qemu-img command convert the part files too?

Thanks a lot.
 
Would I need to convert the whole-disk device, /dev/zvol/datastore/vm-101-disk-0?
Yes.

I understand that in ZFS it is not a disk file.
Technically, it's a block device (which is also a file, because everything is a file in Linux).

qemu-img convert -f raw /dev/zvol/datastore/vm-101-disk-0 -O vmdk zappix-neu.vmdk

If everything is correct, will the qemu-img command convert the part files too?
Yes, that command looks OK, and the partitions come along with it, since they are contained in the whole disk.
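For anyone landing here later: only the whole-disk node is needed as the source. A quick way to filter the -partN links out of a list of candidate device nodes (the helper name pick_whole_disk is invented for this sketch):

```shell
# Keep only the whole-disk device node, dropping the -partN partition links;
# the whole disk already contains every partition's bytes.
pick_whole_disk() {
  printf '%s\n' "$@" | grep -v -- '-part[0-9][0-9]*$'
}

pick_whole_disk \
  /dev/zvol/datastore/vm-101-disk-0 \
  /dev/zvol/datastore/vm-101-disk-0-part1 \
  /dev/zvol/datastore/vm-101-disk-0-part2 \
  /dev/zvol/datastore/vm-101-disk-0-part3
# → /dev/zvol/datastore/vm-101-disk-0
```

The surviving path is what goes into qemu-img convert as the raw source.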
 
Technically, it's a block device (which is also a file, because everything is a file in Linux).

I would be careful with these assumptions about ZFS; not everything that is a file is a block device. ;)

Code:
# zfs list -t filesystem -o name,used
NAME                      USED
rpool                    46.7G
rpool/ROOT               2.02G
rpool/ROOT/pve-1         2.02G
rpool/data                 96K
rpool/subvol-101-disk-0   936M
rpool/subvol-102-disk-0  6.81G
rpool/subvol-105-disk-0  9.04M
rpool/subvol-106-disk-0   443M

# zfs list -t volume -o name,used
NAME         USED
rpool/swap  34.0G
root@pve5:~#

There are datasets of the filesystem type, and there are zvols, which actually are block devices. Because a zvol is a block device in the regular sense of the word, swap can be created on it (see rpool/swap above). PVE creates zvols for VM disks and filesystem datasets (subvol-*) for containers. Zvols are linked from the pool's subpath in /dev and mixed in between the other datasets.
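The distinction is visible from the shell: stat's file-type output tells the two apart. The sketch below uses /dev/null and /tmp as portable stand-ins so it runs anywhere; on the OP's host you would point check_type (a made-up helper) at /dev/zvol/datastore/vm-101-disk-0 and at a subvol mountpoint instead.

```shell
# stat -c '%F' prints the file type. A zvol node reports "block special
# file"; a mounted dataset's mountpoint reports "directory". The two paths
# below are portable stand-ins so the sketch runs anywhere.
check_type() { stat -c '%F' "$1"; }

check_type /dev/null   # → character special file (a zvol would say: block special file)
check_type /tmp        # → directory
```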

EDIT: Mistakenly mixed up statement on VMs and CTs in PVE.

For the more curious: https://www.youtube.com/watch?v=BIxVSqUELNc

But take my post as a footnote only.
 
I would be careful with these assumptions about ZFS; not everything that is a file is a block device. ;)
It's not an assumption. It's a fact that the file the OP asked about is a block device. Datasets are not represented in /dev/zvol, only zvols ... hence the name.
 
It's not an assumption. It's a fact that the file the OP asked about is a block device. Datasets are not represented in /dev/zvol, only zvols ... hence the name.

This statement is correct; however, I found the terse reaction on the "file" aspect of it a bit quirky. The OP, I believe, wanted to confirm that it was not a VM image file (in the casual sense of the word) he was dealing with.

It is also true that what he was converting is a block device, because everything in /dev/zvol is.

I also agree that everything in Linux is represented by a file descriptor.

I am not sure that statements using broadly used terms like "file" or "ZFS" are self-explanatory. I do not know whether the OP was interested, but others might find this post helpful later on (including our nitpicking).

EDIT: Went back and checked what caused the reaction, fixed the wrong statement, thanks for reaction.
 
qemu-img convert -f raw /dev/zvol/datastore/vm-101-disk-0 -O vmdk zappix-neu.vmdk
To clarify: will the qemu-img convert command recognize that there are part1, part2, and part3?
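One way to convince yourself, sketched here with plain files instead of zvols: a partition is just a byte range inside the whole disk, so copying the whole device necessarily carries every partition along.

```shell
# Write two "partitions" at different offsets of a scratch "disk", copy the
# whole disk in one go, then read the second partition back out of the copy.
disk=$(mktemp); copy=$(mktemp)

printf 'PART1DATA' | dd of="$disk" bs=1 seek=0    conv=notrunc 2>/dev/null
printf 'PART2DATA' | dd of="$disk" bs=1 seek=4096 conv=notrunc 2>/dev/null

dd if="$disk" of="$copy" bs=4096 2>/dev/null   # copy the whole "disk"

part2=$(dd if="$copy" bs=1 skip=4096 count=9 2>/dev/null)
echo "$part2"                                  # → PART2DATA
rm -f "$disk" "$copy"
```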
 
Thanks for confirming, @LnxBil - I assumed so, but was not sure.

In case you wondered why that is:
Code:
# zfs list -o name,type

NAME                     TYPE
rpool                    filesystem
rpool/ROOT               filesystem
...
rpool/swap               volume
rpool/vm-201-disk-0      volume
Code:
# ls -la /dev/rpool/vm-201-disk*

lrwxrwxrwx 1 root root 7 Nov 11 17:01 /dev/rpool/vm-201-disk-0 -> ../zd16
lrwxrwxrwx 1 root root 9 Nov 11 17:01 /dev/rpool/vm-201-disk-0-part1 -> ../zd16p1
lrwxrwxrwx 1 root root 9 Nov 11 17:01 /dev/rpool/vm-201-disk-0-part2 -> ../zd16p2
lrwxrwxrwx 1 root root 9 Nov 11 17:01 /dev/rpool/vm-201-disk-0-part5 -> ../zd16p5
Code:
# ls -la /dev/zvol/rpool/vm-201-disk*

lrwxrwxrwx 1 root root 10 Nov 11 17:01 /dev/zvol/rpool/vm-201-disk-0 -> ../../zd16
lrwxrwxrwx 1 root root 12 Nov 11 17:01 /dev/zvol/rpool/vm-201-disk-0-part1 -> ../../zd16p1
lrwxrwxrwx 1 root root 12 Nov 11 17:01 /dev/zvol/rpool/vm-201-disk-0-part2 -> ../../zd16p2
lrwxrwxrwx 1 root root 12 Nov 11 17:01 /dev/zvol/rpool/vm-201-disk-0-part5 -> ../../zd16p5
Code:
# ls -la /dev/zd16*

brw-rw---- 1 root disk 230, 16 Nov 11 17:01 /dev/zd16
brw-rw---- 1 root disk 230, 17 Nov 11 17:01 /dev/zd16p1
brw-rw---- 1 root disk 230, 18 Nov 11 17:01 /dev/zd16p2
brw-rw---- 1 root disk 230, 21 Nov 11 17:01 /dev/zd16p5
Code:
# ls -la /sys/devices/virtual/block/zd16/

total 0
drwxr-xr-x 11 root root    0 Nov 11 16:45 .
drwxr-xr-x 12 root root    0 Nov  5 02:52 ..
-r--r--r--  1 root root 4096 Nov 11 17:15 alignment_offset
lrwxrwxrwx  1 root root    0 Nov 11 17:15 bdi -> ../../bdi/230:16
-r--r--r--  1 root root 4096 Nov 11 17:15 capability
-r--r--r--  1 root root 4096 Nov 11 17:01 dev
-r--r--r--  1 root root 4096 Nov 11 17:15 discard_alignment
-r--r--r--  1 root root 4096 Nov 11 17:15 diskseq
-r--r--r--  1 root root 4096 Nov 11 17:15 events
-r--r--r--  1 root root 4096 Nov 11 17:15 events_async
-rw-r--r--  1 root root 4096 Nov 11 17:15 events_poll_msecs
-r--r--r--  1 root root 4096 Nov 11 17:15 ext_range
-r--r--r--  1 root root 4096 Nov 11 17:15 hidden
drwxr-xr-x  2 root root    0 Nov 11 17:01 holders
-r--r--r--  1 root root 4096 Nov 11 17:15 inflight
drwxr-xr-x  2 root root    0 Nov 11 17:01 integrity
drwxr-xr-x  2 root root    0 Nov 11 17:01 power
drwxr-xr-x  2 root root    0 Nov 11 16:45 queue
-r--r--r--  1 root root 4096 Nov 11 17:15 range
-r--r--r--  1 root root 4096 Nov 11 17:15 removable
-r--r--r--  1 root root 4096 Nov 11 17:15 ro
-r--r--r--  1 root root 4096 Nov 11 16:45 size
drwxr-xr-x  2 root root    0 Nov 11 17:01 slaves
-r--r--r--  1 root root 4096 Nov 11 17:15 stat
lrwxrwxrwx  1 root root    0 Nov 11 16:45 subsystem -> ../../../../class/block
drwxr-xr-x  2 root root    0 Nov 11 17:01 trace
-rw-r--r--  1 root root 4096 Nov 11 16:45 uevent
drwxr-xr-x  5 root root    0 Nov 11 17:01 zd16p1
drwxr-xr-x  5 root root    0 Nov 11 17:01 zd16p2
drwxr-xr-x  5 root root    0 Nov 11 17:01 zd16p5
Code:
# cat /lib/udev/rules.d/60-zvol.rules

# Persistent links for zvol
#
# persistent disk links: /dev/zvol/dataset_name
# also creates compatibility symlink of /dev/dataset_name

KERNEL=="zd*", SUBSYSTEM=="block", ACTION=="add|change", PROGRAM=="/lib/udev/zvol_id $devnode", SYMLINK+="zvol/%c %c"
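The PROGRAM/SYMLINK pair in this rule explains the two link trees seen earlier: zvol_id prints the pool/dataset name for a zd* node, and SYMLINK+="zvol/%c %c" creates one link under /dev/zvol/ plus one compatibility link directly under /dev/. Sketched without udev (zvol_links is a made-up stand-in for what the rule produces):

```shell
# Given what /lib/udev/zvol_id would print for a zd* node (the dataset
# name), emit the two symlink paths the rule creates: zvol/%c and %c.
zvol_links() {
  printf '/dev/zvol/%s\n/dev/%s\n' "$1" "$1"
}

zvol_links rpool/vm-201-disk-0
# → /dev/zvol/rpool/vm-201-disk-0
# → /dev/rpool/vm-201-disk-0
```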
Code:
# head /lib/udev/rules.d/60-persistent-storage.rules

# do not edit this file, it will be overwritten on update

# persistent storage links: /dev/disk/{by-id,by-uuid,by-label,by-path}
# scheme based on "Linux persistent device names", 2004, Hannes Reinecke <hare@suse.de>

ACTION=="remove", GOTO="persistent_storage_end"
ENV{UDEV_DISABLE_PERSISTENT_STORAGE_RULES_FLAG}=="1", GOTO="persistent_storage_end"

SUBSYSTEM!="block|ubi", GOTO="persistent_storage_end"
KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*|ubi*|scm*|pmem*|nbd*|zd*", GOTO="persistent_storage_end"
 
