Working With ZFS Virtual Disks

rbeard.js

Member
Aug 11, 2022
Hi All,

We are just getting started moving all our machines to Proxmox VE.
We are using VMware Converter on our machines to grab a VMDK disk.

When I was testing out Proxmox and learning how to use it, I had the storage set up as LVM-thin, as I didn't have enough disks on hand to do a RAIDZ2 like we are going to in production. With the LVM-thin setup, I was able to create a virtual machine with parameters that matched the machine I was converting, then run a qemu-img convert command on my VMDK and replace the disk that was created during the VM setup process. I could then boot and be good to go. I could also copy the virtual disk off the server as a backup or to move it to a test box.
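For reference, the convert step I mean is something along these lines (paths are just placeholders):
Code:
# convert the VMware disk into a raw image to drop in place of the disk created by the VM wizard
qemu-img convert -f vmdk -O raw /path/to/source.vmdk /path/to/vm-disk.raw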

Clearly, this process is much different on ZFS. Can someone explain like I'm five what the process is to achieve the same goal as I was doing before with LVM-thin? I have a bunch of machines I made VMDK disks for and converted to raw, and I need to import them as VMs into my ZFS-based Proxmox. I would also like to learn how to pull a copy of my virtual disk off manually if I need to for some reason.
I know there are the backup and restore options that create VMA files I can decompress and convert. That only helps me in normal operation though, and I would like to know how to do this manually. It also doesn't help me get the VMs into the machine in the first place.

I understand that I need to use send, recv, and snapshot commands for this but my syntax must be off or something because I cannot seem to make it work right.

The current machine I am trying to import shows as vm-106-disk-0. My ZFS pool is named datapool. Also, I have both /dev/datapool and /dev/zvol/datapool; is this normal, as they seem to contain the same contents? Also, some of my VMs have multiple disks. I assume the procedure is the same and I just need to copy each one out separately.

I'm sorry for my noobishness, and I very much appreciate the help.
 
You can use qm disk import TargetVMID /path/to/vmdk TargetStorageID (see here: https://pve.proxmox.com/pve-docs/qm.1.html) to import a disk to a VM. It will then convert it to raw and store it as a zvol if your target storage is of type "zfspool".

What's your ZFS structure as reported by zfs list? "datapool/vm-106-disk-0"? Then you could try something like this to store a zvol into a file:
Code:
zfs snapshot datapool/vm-106-disk-0@copy
zfs send datapool/vm-106-disk-0@copy | gzip > /path/to/store/yourbackup.gz
zfs destroy datapool/vm-106-disk-0@copy

When working with ZFS you will have to use the "zfs" command (to manipulate datasets and zvols) and the "zpool" command (to manipulate a pool and its vdevs).

Make yourself familiar with the most used commands: zpool status, zpool scrub, zpool resilver, zpool create, zpool set, zpool get, zpool import, zpool export, zpool list, zpool add, zpool replace, zfs list, zfs set, zfs get, zfs create, zfs destroy, zfs snapshot, zfs clone, zfs send, zfs recv, zfs rename.
You will need them sooner or later.
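For example, a quick health check of the pool and a look at everything on it is just:
Code:
zpool status datapool          # pool and vdev health, scrub/resilver progress
zfs list -r -t all datapool    # all datasets, zvols and snapshots below the pool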

You usually don't work on the file level, like you normally would do with ls, cp, mv and so on. You only use those when you want to work with files stored on a dataset, not when working with ZFS itself.
 
You can use qm disk import TargetVMID /path/to/vmdk TargetStorageID (see here: https://pve.proxmox.com/pve-docs/qm.1.html) to import a disk to a VM. It will then convert it to raw and store it as a zvol if your target storage is of type "zfspool".
So here is my zfs list output

[Screenshot: zfs list output]

And here is the command I tried to import my .raw disk into my datapool. It didn't seem to recognize the qm disk command. I see a move-disk command, but I'm not sure if that will give me the results I'm looking for. Also, is my syntax on the rest of the qm command correct? The disk was pre-converted and is just in a random folder named datapool. My ZFS storage was also named datapool.
Is there something I am doing wrong here, or are additional add-ons needed?

[Screenshot: the attempted qm import command and its error output]

The other command to pull my disks back out did work just fine though! However, is using gzip necessary to the process, or could I just move the .raw file around? If not, would I just need to decompress the newly made .gz file?
EDIT: Well, I tried decompressing it and it's no longer a .raw file. So I guess my question is: is there a way to pull it as a .raw without compression?

And I will try and familiarize myself with those commands, thank you!
 
So here is my zfs list output

[Screenshot: zfs list output]

And here is the command I tried to import my .raw disk into my datapool. It didn't seem to recognize the qm disk command. I see a move-disk command, but I'm not sure if that will give me the results I'm looking for. Also, is my syntax on the rest of the qm command correct? The disk was pre-converted and is just in a random folder named datapool. My ZFS storage was also named datapool.
Is there something I am doing wrong here, or are additional add-ons needed?

[Screenshot: the attempted qm import command and its error output]
Not sure why it complains, but you could also try it with its alias "qm importdisk":
qm importdisk
An alias for qm disk import.
And you need to use the StorageID of your target storage, not its mountpoint. The StorageID is the name you see for the storage in the webUI.
If you have a VM with VMID 106 and a VMDK image file at /tmp/myimage.vmdk, and you want to import that image as a zvol to a storage of type "zfspool" named "MyStorage", you would need to run qm disk import 106 /tmp/myimage.vmdk MyStorage or qm importdisk 106 /tmp/myimage.vmdk MyStorage
 
Not sure why it complains, but you could also try it with its alias "qm importdisk":
Worked like a charm! Thank you so much for the help!

I edited a small note in my last post about the zfs send commands. Is the usage of gzip necessary, or can I pull the files as their original .raw file type?

I'm going to do some reading up on disaster recovery processes with ZFS before we need them. Is it really as easy as resilvering a new disk after replacing one in either my boot mirror or main ZFS pool?
 
Is the usage of gzip necessary, or can I pull the files as their original .raw file type?
Not needed, but will save space and transfer time.
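If you just want to skip the compression, you can redirect the send stream straight into a file (re-using the snapshot from the earlier example; the filename is only a placeholder, and note that the result is still a ZFS send stream, not a raw disk image):
Code:
zfs send datapool/vm-106-disk-0@copy > /path/to/store/backup.zfsstream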

Is it really as easy as resilvering a new disk after replacing one in either my boot mirror or main ZFS pool?
You can't just swap the disks. You will also have to partition the new disk by cloning the partition table and then write a bootloader to it. See the chapter "Changing a failed bootable device": https://pve.proxmox.com/wiki/ZFS_on_Linux#_zfs_administration
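As a rough sketch of what that chapter describes (device names, partition numbers and the pool name "rpool" are placeholders for whatever your system actually uses):
Code:
# clone the partition table from the healthy mirror member to the new disk, then give it new GUIDs
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb
# resilver onto the new disk's ZFS partition (partition 3 on a default PVE layout);
# <old-zfs-partition> is whatever "zpool status" shows for the failed member
zpool replace -f rpool <old-zfs-partition> /dev/sdb3
# make the new disk bootable again (proxmox-boot-tool, ESP on partition 2)
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2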
 
Not needed, but will save space and transfer time.
I suppose some further explanation of what I'm trying to do might help; I'm sorry.

I have 3 VMs running in our Proxmox test environment on ZFS that I would like to export the raw disk files from so we can import them into the finalized server. I'm also planning to keep those raw image files on some kind of cold storage, just in case.

Maybe there is an easier way of doing this, but I figured learning the manual process could prove valuable.
So the gzip way works, but when I unzip it, the file has no file type.
So I'm looking to use that zfs send command to move the .raw image. I was getting errors when I tried running the command without the gzip. It was complaining that my target was a directory. I might be messing up the syntax.
I apologize for asking so many questions.
 
I have 3 VMs running in our Proxmox test environment on ZFS that I would like to export the raw disk files from so we can import them into the finalized server. I'm also planning to keep those raw image files on some kind of cold storage, just in case.
Why not just back up the VMs with VZDump and then restore them on the other server using the webUI? That way you also have a backup of the required VM config file together with the virtual disks. And when restoring, you can restore to any type of storage, not just ZFS. And there is way less chance of user error.

So the gzip way works, but when I unzip it, the file has no file type.
Yes, there are no files. "zfs send" will pipe its output into the gzip archive, which just stores a block-level stream. You then need to pipe that data stream back from the gzip archive to "zfs recv" so it will create a new zvol with the data of that stream.
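A minimal example of that round trip, re-using the names from earlier in the thread (the target zvol must not already exist on the receiving pool):
Code:
zcat /path/to/store/yourbackup.gz | zfs recv datapool/vm-106-disk-0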
I was getting errors when I tried running the command without the gzip. It was complaining that my target was a directory. I might be messing up the syntax.
Again, don't think of files and directories when working with ZFS. When working with virtual VM disks in raw format, there is no filesystem involved at all. And if you are working with a directory storage on top of a ZFS pool to store raw files, you should consider creating a "zfspool" type storage instead. The filesystem that the directory storage needs is just additional overhead that could be avoided when working with zvols.
And when working with raw files on a directory storage, you also can't make use of ZFS features (so, for example, no snapshots).
 
Why not just back up the VMs with VZDump and then restore them on the other server using the webUI? That way you also have a backup of the required VM config file together with the virtual disks. And when restoring, you can restore to any type of storage, not just ZFS. And there is way less chance of user error.
Well, yeah, that was exactly what I was trying to do by pulling the .raw: I could convert it to any disk type and use it on Proxmox or ESXi or whatever.
I'm looking at the VZDump manual. Is it just vzdump 106 /storage/location, plus compression or a different file name if desired?
 
You can either use the webUI or use the CLI: https://pve.proxmox.com/pve-docs/vzdump.1.html
You would need to specify the storage location with "--dumpdir /storage/location".
But in case you want to import it on a non-PVE server later, you might need to extract that backup archive, as it contains more than just the raw image of a single virtual disk. There might be multiple virtual disks in it, as well as the required VM configs that are needed in case you want to restore it:
https://pve.proxmox.com/wiki/VMA
https://forum.proxmox.com/threads/howto-extract-pve-backup-from-outsite-on-a-debian-machine.60468/
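Roughly, assuming VM 106 and a target storage named "MyStorage" as in the earlier example (timestamps and paths are placeholders):
Code:
# back up VM 106 into a directory
vzdump 106 --dumpdir /path/to/backups --compress zstd
# restore it on a PVE host to any storage
qmrestore /path/to/backups/vzdump-qemu-106-<timestamp>.vma.zst 106 --storage MyStorage
# or, on a non-PVE machine, unpack it to get the raw disks plus the VM config (see the VMA link above)
zstd -d vzdump-qemu-106-<timestamp>.vma.zst
vma extract vzdump-qemu-106-<timestamp>.vma /path/to/extracted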
 
PS: should I move this to a separate topic?

Also interested in these use cases of lower-level VM storage management with `ZFS`.
Have studied all of the above and other sources.

https://pve.proxmox.com/pve-docs/chapter-sysadmin.html
https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool
https://pve.proxmox.com/wiki/ZFS_on_Linux
https://pve.proxmox.com/wiki/ZFS:_S...xmox_Boot_Tool#Switching_to_proxmox-boot-tool
https://pve.proxmox.com/wiki/VMA

The level of storage management based on ZFS from the Debian Bookworm Root on ZFS installation is great because we can control files/data inside the VM itself.

See
Code:
zfs create                     rpool/home
zfs create -o mountpoint=/root rpool/home/root
chmod 700 /mnt/root
zfs create -o canmount=off     rpool/var
zfs create -o canmount=off     rpool/var/lib
zfs create                     rpool/var/log
zfs create                     rpool/var/spool

Then the VM admin can ZFS-snapshot and manage the state of various individual datasets - /home, /etc, and application data/state - at a fine-grained level, instead of at the bulk VM level like VMA does.
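For example (dataset names follow the layout above; the snapshot name is just illustrative):
Code:
# snapshot only the user data before a risky change...
zfs snapshot rpool/home@pre-upgrade
# ...and roll back just /home if needed, without touching the rest of the system
zfs rollback rpool/home@pre-upgrade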

The "OS and data separation" makes sense also inside VMs.
So I want to separate OS, user files and big data and manage snapshots and backups at fine level.

The Proxmox VMA and boot management features are compelling...

How best to do that, to achieve low-level control over VM storage (with ZFS) without conflicting with or reinventing what Proxmox does?

What is the best way to apply the ZFS setup from the native Debian installation to both Proxmox itself and to the VMs ("root on ZFS")?

If I apply the native Debian ZFS setup, will Proxmox play along with it? (Not try to re-partition, but also use ZFS properly?)

I realize that the benefits of Proxmox's proxmox-boot-tool may not be achievable this way, but I am willing to trade that off for having the host OS (including Proxmox) and the VMs' state managed separately, not in the same pool anyway.

Am I missing any big point in this thought process?

Also studied
* https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)
* https://forum.proxmox.com/threads/working-with-zfs-virtual-disks.120276/
* https://forum.proxmox.com/threads/accessing-a-host-partition-from-inside-a-container.54560/
* https://forum.proxmox.com/threads/container-with-physical-disk.42280/#post-203292

Is there a way to access ZFS datasets natively from the host in the same way as above?

Related
* https://forum.proxmox.com/threads/add-ssd-zfs-pool-with-vms-to-new-proxmox-server.116569/post-505551
 