Can't Migrate from ZFS to LVM-thin

jwsl224

I know I should be able to do this, and various forum entries suggest the same, but for some reason I can't. I'm trying to migrate a VM from a Proxmox host with local ZFS storage to a Proxmox host with local LVM-thin storage; both are on version 8.2.

I installed Proxmox on the destination host with ext4, then went to Datacenter (after adding the node) and added Storage > LVM-Thin.

This is the storage definition on the source node:
Code:
zfspool: nvme5
        pool nvme5
        content rootdir,images
        mountpoint /nvme5
        nodes MD72-HB2-1
        sparse 1

This is the storage definition on the destination node:
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
        nodes r730xd-1

This is the VM I'm trying to migrate:
Code:
agent: 1
balloon: 2000
bios: seabios
boot: order=scsi0;ide0;net0
cores: 32
cpu: x86-64-v2-AES
ide0: none,media=cdrom
machine: pc-q35-8.1
memory: 8000
meta: creation-qemu=8.1.5,ctime=1723473166
name: ATB-Doc-5
net0: virtio=00:15:5a:08:f0:23,bridge=vmbr0,firewall=1,tag=3
numa: 0
ostype: win11
scsi0: nvme5:vm-107-disk-0,cache=unsafe,discard=on,iothread=1,size=50G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=4e5e2054-9bd8-4f13-92d9-fb657d5b1480
sockets: 1
tpmstate0: local-zfs:vm-107-disk-0,size=4M,version=v2.0
vmgenid: 359e22b8-9723-4d45-8051-39cc6028f0e3

This is the error that comes up:
Code:
2024-09-04 14:21:54 starting migration of VM 107 to node 'r730xd-1' (10.1.100.103)
2024-09-04 14:21:54 found generated disk 'local-zfs:vm-107-disk-0' (in current VM config)
2024-09-04 14:21:54 found local disk 'nvme5:vm-107-disk-0' (attached)
2024-09-04 14:21:54 copying local disk images
2024-09-04 14:21:54 ERROR: storage migration for 'local-zfs:vm-107-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
2024-09-04 14:21:54 aborting phase 1 - cleanup resources
2024-09-04 14:21:54 ERROR: migration aborted (duration 00:00:01): storage migration for 'local-zfs:vm-107-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
TASK ERROR: migration aborted
 
Hey @fiona, I always appreciate it when the Proxmox staff drop by to help out.
Could you take for granted that I am really new to this and help me make it make sense? The two nodes I am talking about are in the same cluster; their only difference is their file systems. (The one and only reason I didn't do ZFS on the second node is that it absolutely does not function without the RAID controller.) I can live-migrate storage locally on a node between file systems. Am I not supposed to be able to do the same between nodes?

Or did I do something wrong when I set up the storage on the ext4-installed node? Because I can go back and redo it if there's something I can do to make live migrations possible.
 
The problem is that disk migration via the storage layer does not currently support ZFS <-> LVM-thin. That would need to be implemented first. Live migration uses a different mechanism for most disks (QEMU's drive-mirror, which does not have this limitation), but special disks like the TPM state and cloud-init are always migrated via the storage layer.

You'd need to change the storage of the cloud-init disk or TPM state and use a separate target storage for it. On the CLI you can specify a mapping like --targetstorage myzfs,local:local, which means: migrate all disks to the storage 'myzfs', except disks on the 'local' storage, which go to the 'local' storage on the target.
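For illustration, with this thread's VM ID and target node, the mapping syntax could be used like this ('myzfs' and 'local' are just the example storage names from above, not storages that necessarily exist in your cluster):
Code:
# Example of the --targetstorage mapping syntax (storage names are placeholders):
# all disks go to 'myzfs', except disks on the 'local' storage, which stay on 'local'.
qm migrate 107 r730xd-1 --online --targetstorage myzfs,local:local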

 
The problem is that disk migration via the storage layer does not currently support ZFS <-> LVMthin. That would need to be implemented first. Live-migration uses a different mechanism (i.e. QEMU's drive mirror which does not have this limitation) for most disks, but special disks like TPM and cloud-init are always migrated via the storage layer.

You'd need to change the storage of the cloud-init disk and use a separate target-storage for it. On the CLI you can specify a mapping like --targetstorage myzfs,local:local which means migrating all disks to the storage myzfs, except disks on the local storage to the local storage.
But, but! I don't have a cloud-init disk! I am in fact trying to do a live migration of a regular single-disk Windows machine between two cluster nodes. Unless cloud-init means something I don't understand...
 
But you do have a TPM state? The migration log shows
Code:
2024-09-04 14:21:54 found generated disk 'local-zfs:vm-107-disk-0' (in current VM config)
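For anyone following along: the special disks that take this path show up directly in the VM configuration, so a quick check like the following lists them (the grep pattern is just an illustration):
Code:
# List TPM state and cloud-init disks in the config of VM 107
qm config 107 | grep -E 'tpmstate|cloudinit'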
 
It looks like you have a hard disk on 'nvme5' and the TPM state on the 'local-zfs' storage.

Go to the VM, then to the Hardware section. Click on the TPM state and select Disk Action > Move Storage. Most new users don't realize you can move a VM's storage per disk from the GUI.

Stick with ZFS storage if you can; it's already thin-provisioned and is much better than ext4.
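If you prefer the CLI, the same move can be done with qm. This is only a sketch using this thread's VM ID and pool name; the TPM state generally has to be moved while the VM is shut down:
Code:
# Move the TPM state onto the 'nvme5' pool and drop the old copy afterwards (VM powered off)
qm disk move 107 tpmstate0 nvme5 --delete 1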
 
But you do have a TPM state? The migration log shows
OK, so it's the TPM disk that's getting me here?

It looks like you have a hard disk on 'nvme5' and the TPM state on the 'local-zfs' storage.
OK, Proxmox must have done that automatically. I only recently started to add Win11 VMs, so I didn't realize that. That's what's getting me? Moving them to the same pool will solve it?

Stick with ZFS storage if you can; it's already thin-provisioned and is much better than ext4.
Believe me when I tell you, I was dragged kicking and screaming into ext4. I'll create a thread shortly to see if anybody at all has any idea why ZFS hates this server (R730xd) so much.
 
OK, so it's the TPM disk that's getting me here?


OK, Proxmox must have done that automatically. I only recently started to add Win11 VMs, so I didn't realize that. That's what's getting me? Moving them to the same pool will solve it?
Did that fix it for you?
Believe me when I tell you, I was dragged kicking and screaming into ext4. I'll create a thread shortly to see if anybody at all has any idea why ZFS hates this server (R730xd) so much.
I run ZFS on R730xds just fine. Just make sure you take each physical disk out of RAID mode in the RAID controller first. Quit messing around with ext4 and figure out your ZFS issue. Only Ceph beats ZFS, but that's for a bigger 3+ node cluster.

I recommend you create a ZFS pool on both servers in the cluster with the same name (e.g. 'fastpool' for SSDs). That way, when you migrate, the same pool name will already exist on each node.
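A rough sketch of that setup; the device paths and the 'fastpool' name are placeholders, and the zpool step has to be run on each node with its own disks:
Code:
# On each node: build a pool with the same name from disks that are NOT behind RAID mode
zpool create fastpool mirror /dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2>
# Register it once at the datacenter level, restricted to the nodes that actually have it
pvesm add zfspool fastpool --pool fastpool --content images,rootdir --sparse 1 --nodes MD72-HB2-1,r730xd-1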

You can always do a shutdown-style backup and then a restore of the VM if you are having trouble migrating.
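A minimal sketch of that route, assuming a backup-capable storage (here called 'backupstore') that both nodes can reach; the archive path is a placeholder:
Code:
# On the source node: stop-mode backup of VM 107
vzdump 107 --mode stop --storage backupstore --compress zstd
# On the destination node: restore onto the LVM-thin storage
# (pick a new VMID if 107 still exists in the cluster)
qmrestore <path-to-vzdump-archive> 107 --storage local-lvm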

Added warning: don't change the disk mode in the RAID controller until you have all the data off and are ready to rebuild the ZFS pool on those drives.
 
Quit messing around with ext4 and figure out your ZFS issue
If only, if only! If you have any ideas at all how to fix this, I am listening with eyes and ears. I opened another thread here where I posted some actual tests at the behest of some ZFS pros. They haven't found anything obviously wrong with it, though.

Some noobs are trying to tell me that the test drives I'm using are just that slow, and I call fake news. There's no way any modern drive will peg out at 5 MB/s, nor at 60 MB/s.
 
