Migrating zfs pool from non-proxmox to proxmox


New Member
Jul 15, 2023
I'm migrating a ZFS pool from a non-Proxmox system to a Proxmox-managed system.
This is a bit of a two-part question:
1. How do I reliably pass a disk to a VM so that ZFS can use it again?
I've been able to forward a whole disk (like my boot drive) and a 2-disk mirror to a VM with no issue; it took them as-is. However, with my 5-disk array, 2 disks don't show up. On the host I can import the pool no problem, but the guest can't. In addition, many of the drives report generic names like sda instead of the serial/UUID/other unique identifier when I run `zpool status`.

2. How can I change a ZFS pool that was originally managed by a bare-metal install into one that Proxmox manages?
I can import the array, cool, but none of my VMs or containers can use it since it doesn't appear in the GUI.
1. ZFS keeps a record of which host OS last imported a pool. If the pool imports properly on the host but not in the VM, try exporting it from the host first. In an emergency you can also try `zpool import -f`, but otherwise that's not recommended.
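A minimal sketch of that export-then-import dance, assuming the pool name z-media that appears later in this thread:

```shell
# On the Proxmox host: release the pool so the guest can take it over.
zpool export z-media

# Inside the guest: import by stable by-id paths so device names survive
# reboots and controller reordering.
zpool import -d /dev/disk/by-id z-media

# Last resort only, if ZFS insists another host still owns the pool:
# zpool import -f z-media
```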

2. If the ZFS pool imports cleanly, you can edit /etc/pve/storage.cfg to manually expose it to PVE. Mine looks like this:

zfspool: local-zfs
        pool rpool/pve
        content images,rootdir
        nodes pve-1
        sparse 1
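The same storage entry can also be added from the shell with pvesm instead of editing storage.cfg by hand. A sketch, assuming the z-media pool from this thread; the storage ID "z-media-store" is an arbitrary example:

```shell
# Registers the existing pool z-media as a PVE storage for VM disks and
# container rootfs volumes, with thin provisioning enabled.
pvesm add zfspool z-media-store --pool z-media --content images,rootdir --sparse 1

# Verify it shows up:
pvesm status
```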
So I got it imported, no problem. But:

root@UbuntuSensei:/etc/pve# zfs list
NAME                    USED  AVAIL     REFER  MOUNTPOINT
z-media                5.82T  10.2T      170K  none
z-media/ampache        5.19T  10.2T     5.19T  /ampache
z-media/share           602G  10.2T      602G  /smbshares/share
z-media/vm-151-disk-0  43.2G  10.2T     99.4K  -      <--me testing to see what the config would look like

zfspool: z-media
        pool z-media
        mountpoint /ampache
        nodes UbuntuSensei
        sparse 1

My guest VM .conf:
This is a modified one: I created a config through the GUI so I knew what format it wanted, then changed z-media:vm-151-disk-0 to z-media:ampache and updated the size (I can use 17T instead, right?). It's set read-only so I don't trash any data while I'm figuring this out.
scsi7: z-media:ampache,backup=0,iothread=1,replicate=0,ro=1,size=17000G

Attempting to start
TASK ERROR: unable to parse zfs volume name 'ampache'

I also noticed that I'm not sure where these entries are showing up from. I can't seem to find a cfg that has only this vm-161-disk-0 listed.

Appreciate the help so far. It's got me pointed in different directions to look at now.
For the importing... I noticed that some of the drives are no longer "described" (not sure of a better word):

NAME                                   STATE     READ WRITE CKSUM
        z-media                                ONLINE       0     0     0
          raidz2-0                             ONLINE       0     0     0
            ata-ST14000NM001G-2KJ103_ZTM0MH4K  ONLINE       0     0     0
            ata-ST14000NM001G-2KJ103_ZTM0LGCL  ONLINE       0     0     0
            sdf                                ONLINE       0     0     0
            sdg                                ONLINE       0     0     0
            sdh                                ONLINE       0     0     0

Any way to remedy the sdf,g,h?
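One common remedy for the sdf/sdg/sdh entries (a sketch, assuming the pool is not in use and can be briefly exported): re-import the pool while telling ZFS to scan the stable by-id paths, so it records those names instead of the kernel's sdX names.

```shell
# Export z-media, then re-import it using /dev/disk/by-id device paths.
zpool export z-media
zpool import -d /dev/disk/by-id z-media

# The members should now show ata-<model>_<serial> style names:
zpool status z-media
```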
TASK ERROR: unable to parse zfs volume name 'ampache'

Your z-media/ampache is a dataset, not the ZVOL that PVE is expecting. You should give it a ZVOL to work with, like zfs create -s -V 64G z-media/ampache.

For the extra disk, it's because PVE scans the given storage source for valid names. It picks up anything with the pattern vm-<id>-<whatever> from the source storage.
dataset, not a ZVOL volume
Ok, so datasets are passed to containers and ZVOLs are passed to VMs... I think? Is that correct? I have data inside that dataset and I don't have any spare space to park it while I recreate it, so I can't do that. I'm only trying to pass it to a VM so I can continue creating services as containers until I'm ready to kill the VM.

A separate ZFS question: how can I tell that it is a dataset and not a ZVOL? In the zfs list output they look the same.

I've switched to trying to pass it as-is to a container instead, but I seem to be having trouble with that as well: the same can't-parse problem. The only way I seem to be able to do this is to bind-mount the directory and change permissions. I guess this might be my only way to hand the dataset and its data as-is to a container?
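For reference, the bind-mount approach can be sketched like this. The CT id 151 and the in-container path /mnt/ampache are assumptions for illustration; /ampache is the dataset mountpoint from the thread. Run on the Proxmox host:

```shell
# Bind-mount the host directory /ampache into container 151 as mp0.
# The container sees it at /mnt/ampache; the data stays in the dataset.
pct set 151 -mp0 /ampache,mp=/mnt/ampache
```

Note that bind mounts are configured host-side and are excluded from backups by default, which fits the "don't trash my data while testing" goal here.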

picks up anything with the pattern vm-<id>-<whatever>
Got it. I was suspecting this but wasn't sure. thanks!
so Datasets are passed to containers and zvol is passed to VMs
I don't have a swap space available to put it while I recreate it
Normally you would just create another dataset for your VM/CT, like z-media/ampache2. If you really need the exact name, there's zfs rename to the rescue.
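A sketch of the rename route, using the names from this thread (ampache-old is a hypothetical target name):

```shell
# Move the existing dataset out of the way; its data and mountpoint
# properties travel with it.
zfs rename z-media/ampache z-media/ampache-old

# The old name is now free for PVE to create a fresh ZVOL under it.
zfs list -r z-media
```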
How can I tell that it is a dataset and not a ZVOL?
In the zfs list output, if the mountpoint is a single dash, it's a ZVOL (which obviously cannot be mounted by ZFS). If it's anything else (a valid path, none, or legacy), it's a dataset. Although the "canonical" answer is zfs get type pool0/something, where the result is either "volume" or "filesystem" (I might be mixing the terms "filesystem" and "dataset").
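Concretely, using the names from the zfs list output earlier in the thread:

```shell
# "filesystem" means a dataset; "volume" means a ZVOL.
zfs get -H -o value type z-media/ampache
zfs get -H -o value type z-media/vm-151-disk-0

# Or ask zfs list to show only one kind:
zfs list -t volume
zfs list -t filesystem
```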
same problem of can't parse
I recommend creating a new dataset through PVE and see if there's anything different.

Also, I recommend against pointing PVE at a ZFS dataset that already has child datasets. I would create an empty dataset like pool0/pve and dedicate it to PVE.

