Migrating a VM from ZFS to LVM (not backup/restore)

Ivan Gersi

There are several topics on this forum about this issue, but I haven't been able to migrate a VM from a ZFS node to an LVM node.
I've tried offline, online, and moving the VM's disk from zfs to local (dir), all with no success.
This is my storage:
Code:
root@pve1:~# pvesm status
Name       Type     Status    Total       Used       Available   %
NAS        cifs     active    3841408272  865708736  2975699536  22.54%
local      dir      active    3739953920  63449984   3676503936  1.70%
local-lvm  lvmthin  disabled  0           0          0           N/A
zfs-local  zfspool  active    3707678684  31174720   3676503964  0.84%
local-lvm is on pve2; pve1 has a ZFS filesystem.
Code:
root@pve1:~# lsblk
NAME     MAJ:MIN  RM    SIZE  RO  TYPE  MOUNTPOINT
sda        8:0     0    3.6T   0  disk
├─sda1     8:1     0   1007K   0  part
├─sda2     8:2     0    512M   0  part
└─sda3     8:3     0    3.6T   0  part
sdb        8:16    0    3.6T   0  disk
├─sdb1     8:17    0   1007K   0  part
├─sdb2     8:18    0    512M   0  part
└─sdb3     8:19    0    3.6T   0  part
sr0       11:0     1   1024M   0  rom
zd0      230:0     0    500G   0  disk
├─zd0p1  230:1     0    549M   0  part
└─zd0p2  230:2     0  499.5G   0  part

I've read on this forum that it should be possible to migrate a VM from ZFS to LVM, but I can't get it to work.
The VM's disk is raw, because LVM doesn't accept qcow2.
Here the VM is online:
Code:
root@pve1:~# qm migrate 100 pve2 --targetstorage local-lvm --with-local-disks --online
2023-05-11 21:52:05 starting migration of VM 100 to node 'pve2' (172.16.0.253)
2023-05-11 21:52:05 found local disk 'local:100/vm-100-disk-0.raw' (in current VM config)
2023-05-11 21:52:05 found local disk 'zfs-local:vm-100-disk-0' (via storage)
2023-05-11 21:52:05 copying local disk images
2023-05-11 21:52:05 ERROR: storage migration for 'zfs-local:vm-100-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
2023-05-11 21:52:05 aborting phase 1 - cleanup resources
2023-05-11 21:52:05 ERROR: migration aborted (duration 00:00:01): storage migration for 'zfs-local:vm-100-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
And here the VM is offline (same command; qm falls back to an offline migration):
Code:
root@pve1:~# qm migrate 100 pve2 --targetstorage local-lvm --with-local-disks --online
VM isn't running. Doing offline migration instead.
2023-05-11 20:01:55 starting migration of VM 100 to node 'pve2' (172.16.0.253)
2023-05-11 20:01:55 found local disk 'local:100/vm-100-disk-0.raw' (in current VM config)
2023-05-11 20:01:55 found local disk 'zfs-local:vm-100-disk-0' (via storage)
2023-05-11 20:01:55 copying local disk images
2023-05-11 20:01:56 Logical volume "vm-100-disk-0" created.
2023-05-11 21:42:33 131072000+0 records in
2023-05-11 21:42:33 131072000+0 records out
2023-05-11 21:42:33 536870912000 bytes (537 GB, 500 GiB) copied, 6037 s, 88.9 MB/s
2023-05-11 21:42:35 708+53954029 records in
2023-05-11 21:42:35 708+53954029 records out
2023-05-11 21:42:35 536870912000 bytes (537 GB, 500 GiB) copied, 6037.98 s, 88.9 MB/s
2023-05-11 21:42:35 successfully imported 'local-lvm:vm-100-disk-0'
2023-05-11 21:42:35 volume 'local:100/vm-100-disk-0.raw' is 'local-lvm:vm-100-disk-0' on the target
2023-05-11 21:42:35 ERROR: storage migration for 'zfs-local:vm-100-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
2023-05-11 21:42:35 aborting phase 1 - cleanup resources
2023-05-11 21:42:36 ERROR: migration aborted (duration 01:40:42): storage migration for 'zfs-local:vm-100-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
migration aborted
OK, I can use classic backup/restore, but I'm curious: is it possible to migrate a VM online/offline at all?
 
Proxmox gives no overview of what is possible. There is a simple rule if you want to migrate a local disk: the source and destination datastore must have the same label and must be of the same type. That's why you get this error.
You can do it the manual way, but that is for experts (see the sketch after this list):

1. Create an LVM disk of the same size. This should be possible from the CLI.
2. Manually copy the data from the source block device to the destination block device, as shown here: https://serverfault.com/questions/1...ing-ssh-to-remote-location-with-only-60-gb-of
3. Detach the ZFS disk from the source VM.
4. Migrate the VM to node 2.
5. Add the LVM disk to the VM.

The interesting thing is that Proxmox has no problem doing this locally.
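
A minimal sketch of steps 1 and 2, assuming the PVE default VG/thin pool pve/data on the target node and that the zvol lives under /dev/zvol/zfs-local/ (the actual pool name comes from your storage.cfg):
Code:
# On pve2: create a thin LV of the same size as the source disk
# (VG "pve" and thin pool "data" are the PVE defaults -- adjust to yours):
lvcreate -V 500G --thin -n vm-100-disk-0 pve/data

# On pve1: stream the zvol to the new LV over SSH (VM must be stopped).
# The zvol path is an assumption -- verify with: ls /dev/zvol/
dd if=/dev/zvol/zfs-local/vm-100-disk-0 bs=1M status=progress \
  | ssh root@pve2 'dd of=/dev/pve/vm-100-disk-0 bs=1M'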
 
Proxmox gives no overview of what is possible.
There are normally no restrictions, and I cannot think of any. Maybe there is something wrong with the VM itself? In the first log two disks are found. I have done a lot of storage migrations with all kinds of storages and I have never encountered the behaviour you're describing. What is the status of your LVM thin pool?
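
One quick way to check (a sketch; the VG/pool names are the PVE defaults, not taken from this thread):
Code:
# On the target node:
lvs pve                        # the "data" thin pool should be listed
pvesm status | grep local-lvm  # should report "active", not "disabled"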
 
There are normally no restrictions
Where did you get that idea? The first disk migrated successfully because both hosts have LVM storage. Create a ZFS storage with the same name on your 2nd node and you won't get any error.
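
A sketch of what that could look like from the CLI (the pool name "tank" is an assumption; use a ZFS pool that actually exists on the 2nd node):
Code:
# Define a zfspool storage with the same id on pve2:
pvesm add zfspool zfs-local --pool tank --nodes pve2
# If the id already exists cluster-wide and is merely restricted to
# pve1, extending the node list is enough:
pvesm set zfs-local --nodes pve1,pve2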
 
The question really is what causes the error "cannot migrate from storage type".

Let's find it:
Code:
/usr/share/perl5/PVE# grep -R "cannot migrate from storage type"
Storage.pm:    die "cannot migrate from storage type '$scfg->{type}' to '$tcfg->{type}'\n" if !@formats;

Code:
  my @formats = volume_transfer_formats($cfg, $volid, $target_volid, $opts->{snapshot}, $opts->{base_snapshot}, $opts->{with_snapshots});
    die "cannot migrate from storage type '$scfg->{type}' to '$tcfg->{type}'\n" if !@formats;

Code:
sub volume_transfer_formats {
    my ($cfg, $src_volid, $dst_volid, $snapshot, $base_snapshot, $with_snapshots) = @_;
    my @export_formats = volume_export_formats($cfg, $src_volid, $snapshot, $base_snapshot, $with_snapshots);
    my @import_formats = volume_import_formats($cfg, $dst_volid, $snapshot, $base_snapshot, $with_snapshots);
    my %import_hash = map { $_ => 1 } @import_formats;
    my @common = grep { $import_hash{$_} } @export_formats;
    return @common;
}
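
The die fires when @common is empty, i.e. when the source plugin's export formats and the target plugin's import formats share no entry. Which formats each plugin implements can be checked with another grep (a sketch; paths as on a standard PVE install):
Code:
grep -n "sub volume_export_formats\|sub volume_import_formats" \
    /usr/share/perl5/PVE/Storage/*.pm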

If no one else picks this up, I may look into "deciphering" it next week.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Proxmox gives no overview of what is possible. There is a simple rule if you want to migrate a local disk: the source and destination datastore must have the same label and must be of the same type. [...]
The interesting thing is that Proxmox has no problem doing this locally.
The problem is exactly the same label and same type: I tried creating a 'local' storage on both nodes, but one node has the zfs type and the second lvm, and the migration did not work.
I'm only curious now, because backup/restore is more effective for me anyway (a 21 GB zstd backup takes about 5 min, the restore the same) versus about 90 min for the migration.
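
For reference, the backup/restore path with the storage names from this thread (a sketch; the dump filename is illustrative, vzdump prints the real one):
Code:
# On pve1: back up to the shared CIFS storage:
vzdump 100 --storage NAS --compress zstd
# On pve2: restore onto the thin pool (replace <timestamp> with the
# filename vzdump printed):
qmrestore /mnt/pve/NAS/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-lvm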
 
The question really is what causes the error "cannot migrate from storage type". [...]
If no one else picks this up, I may look into "deciphering" it next week.
Did you ever decipher this? I am running into the same issue. Several people have run into the same thing on this forum, but there don't seem to be any solved threads on it.
 
