VM Migration from Node 2 to Node 3 (Both in the same Cluster)

Hello everyone,

I encountered a problem while trying to perform a migration:

Code:
2023-07-26 13:51:17 starting migration of VM 2000 to node 'Jobcluster-BMS' (x.x.x.x)
2023-07-26 13:51:17 found local disk 'local-zfs:vm-2000-disk-0' (in current VM config)
2023-07-26 13:51:17 copying local disk images
2023-07-26 13:51:18 full send of rpool/data/vm-2000-disk-0@__migration__ estimated size is 16.9G
2023-07-26 13:51:18 total estimated size is 16.9G
2023-07-26 13:51:19 command 'zfs recv -F -- rpool/data/vm-2000-disk-0' failed: open3: exec of zfs recv -F -- rpool/data/vm-2000-disk-0 failed: No such file or directory at /usr/share/perl5/PVE/Tools.pm line 455.
2023-07-26 13:51:19 command 'zfs send -Rpv -- rpool/data/vm-2000-disk-0@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2023-07-26 13:51:19 ERROR: storage migration for 'local-zfs:vm-2000-disk-0' to storage 'local-zfs' failed - command 'set -o pipefail && pvesm export local-zfs:vm-2000-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=Jobcluster-BMS' root@x.x.x.x -- pvesm import local-zfs:vm-2000-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ -delete-snapshot __migration__ -allow-rename 1' failed: exit code 2
2023-07-26 13:51:19 aborting phase 1 - cleanup resources
2023-07-26 13:51:19 ERROR: migration aborted (duration 00:00:02): storage migration for 'local-zfs:vm-2000-disk-0' to storage 'local-zfs' failed - command 'set -o pipefail && pvesm export local-zfs:vm-2000-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=Jobcluster-BMS' root@x.x.x.x -- pvesm import local-zfs:vm-2000-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ -delete-snapshot __migration__ -allow-rename 1' failed: exit code 2
TASK ERROR: migration aborted

Does anyone know the cause of this issue?
It would be great if you could help me :)

(Note: I replaced the actual IP address with 'x.x.x.x' for privacy.)
 
Are these Node 2 and Node 3? (The title is a bit confusing.)
Is "Jobcluster-BMS" the destination node? It seems you do not have "local-zfs" (basically no ZFS) on that node, is that correct? You can check this directly on the node, see the sketch below.
I have 3 nodes.
Migration between the first 2 nodes works fine, without errors.
The 3rd one is the new node ("Jobcluster-BMS") and is the destination.
 
If Jobcluster-BMS does not have a local ZFS pool ("local-zfs"), you need to edit the cluster storage so it does not apply to this node (see the sketch below).
In essence you need to disable it there, since it cannot be activated ("zpool command not found" means ZFS is not installed).
That way the migration will not try to copy to something that does not exist.
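For example (a sketch; the node names node1 and node2 are placeholders, use the nodes that actually have the pool), either via the CLI or by editing /etc/pve/storage.cfg on any cluster node:

Code:
# restrict 'local-zfs' to the nodes that actually have the ZFS pool
pvesm set local-zfs --nodes node1,node2

# the resulting entry in /etc/pve/storage.cfg would look roughly like:
# zfspool: local-zfs
#         pool rpool/data
#         content images,rootdir
#         nodes node1,node2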
 
Why can't I activate ZFS?
I could even reinstall the node, because it's empty.
 
I have found the following errors in my log:

Code:
Jul 26 16:58:39 Jobcluster-BMS pvedaemon[66589]: zfs error: open3: exec of zpool list -o name -H rpool failed: No such file or directory at /usr/share/perl5/PVE/Tools.pm line 455.
Jul 26 16:58:39 Jobcluster-BMS pvedaemon[66589]: zfs error: open3: exec of zpool list -o name -H rpool failed: No such file or directory at /usr/share/perl5/PVE/Tools.pm line 455.
Jul 26 16:58:39 Jobcluster-BMS pvedaemon[66589]: could not activate storage 'local-zfs', zfs error: open3: exec of zpool import -d /dev/disk/by-id/ -o cachefile=none rpool failed: No such file or directory at /usr/share/perl5/PVE/Tools.pm line 455.
 
No "rpool" means you have not installed using ZFS.
By install I mean install from scratch (boot using ZFS).
How many disks have you used? Is this node same disks as the others ?
Can you list the status of the node's disks ?
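Something along these lines, run on Jobcluster-BMS, would show it (a sketch):

Code:
# is ZFS installed, and does any pool exist?
zpool list
zfs list

# what disks does the node have and how are they currently used?
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT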
 
So you can use the node as is, or reinstall it if you want Proxmox to manage its disks with ZFS.
But best practice would be to keep the data storage pool separate from the boot disk (the boot disk does not have to be redundant; just add a small one).

As for the original issue: migration will work, but very slowly, especially since you cannot use ZFS send/receive.
It depends on whether you anticipate a lot of live migrations; personally I never do, so backup/restore is faster and easier (a rough sketch follows).
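A rough sketch of that route (the backup storage name 'backup-nfs', the dump file path, and the target storage 'local-lvm' are placeholders, adjust them to what actually exists in your setup):

Code:
# on the source node: back up the VM (snapshot mode keeps downtime minimal)
vzdump 2000 --storage backup-nfs --mode snapshot

# on the destination node: restore it to a storage that exists there
qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-2000-<timestamp>.vma.zst 2000 --storage local-lvm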
 
