How to restore ZFS datasets?

proxmoxajin

Both nodes in the cluster have ZFS file systems.

On one node (P2) I executed
root@172.31.1.12_P2:/# zfs send zfsP2/vm-199-disk-0@vm199-231204 | ssh 172.31.1.11 zfs recv p1raid0/Fp2-199
and watched the data transfer complete.

On the other node (P1), I then checked the received datasets:
root@172.31.1.11_P1:/# zfs list -rt all p1raid0
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
p1raid0                             86.6G  1.67T    96K  /p1raid0
p1raid0/Fp2-199                     15.3G  1.67T  15.3G  -
p1raid0/Fp2-199@vm199-231204           0B      -  15.3G  -
p1raid0/Fp2-400                     9.84G  1.67T  9.84G  -
p1raid0/Fp2-400@base400-231204         0B      -  9.84G  -
p1raid0/Fp2-401                     17.2G  1.67T  17.2G  -
p1raid0/Fp2-401@vm401-231204           0B      -  17.2G  -
p1raid0/Fp2-801                     7.95G  1.67T  7.95G  -
p1raid0/Fp2-801@vm801-231204           0B      -  7.95G  -
p1raid0/Fp2base-800                 6.99G  1.67T  6.99G  -
p1raid0/Fp2base-800@base800-231204     0B      -  6.99G  -
p1raid0/fromp2zfs                   14.3G  1.67T  14.3G  -
p1raid0/fromp2zfs@vm192-231204         0B      -  14.3G  -
p1raid0/vm-191-disk-0               14.8G  1.67T  14.8G  -
p1raid0/vm-288-disk-0                241M  1.67T   241M  -

Can anyone tell me how to bring up, on P1, the virtual machine that was transferred from P2?
Thanks for the advice! ;)
 
Hi,
is there a particular reason why you chose to move the VM by invoking the zfs send/recv commands manually rather than using the onboard Proxmox VE VM migration tools (I am assuming both nodes are part of the same cluster)? That would make this task much easier for you.
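For reference, a minimal sketch of the onboard migration, assuming a working cluster, VM ID 199, and a target node named P1 (IDs and names are taken from this thread, so adjust as needed):

# offline migration of VM 199 to node P1
qm migrate 199 P1
# or online, copying the local ZFS disks along
qm migrate 199 P1 --online --with-local-disks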

I further see that you transferred only snapshots of the VM under a different naming scheme. For the disks to be detected correctly, you have to follow the same vm-199-disk-0 naming scheme on the receiving side; a rename as sketched below should do.
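A minimal sketch of the rename, assuming the dataset names from your zfs list output above:

# rename the received volume so that Proxmox VE detects it as a disk of VM 199
zfs rename p1raid0/Fp2-199 p1raid0/vm-199-disk-0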

Nevertheless, in order for the VM to start on the new node, you will have to make sure that:
  1. The naming scheme of the volumes on the target node is the same as on the source node.
  2. p1raid0 is correctly configured as a storage for VM images on node P1 (check cat /etc/pve/storage.cfg; see the sketch after this list).
  3. The VM config is located in /etc/pve/nodes/<hostname>/qemu-server/, with <hostname> being your node; if that is not the case, you will have to move it to this location (see the sketch after this list).
  4. The disks in the VM config are updated to reference the new storage name.
  5. All further config references (e.g. pass-through configs, other disks, attached installation ISOs, etc.) which might not be present on the new node are updated.
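A sketch of points 2-4, assuming the storage ID on P1 is also called p1raid0, the old storage ID on P2 was zfsP2, and the node directories are named P1 and P2 (all of these are assumptions based on this thread; adjust to your setup):

# 2. /etc/pve/storage.cfg on P1 should contain an entry along these lines:
#    zfspool: p1raid0
#            pool p1raid0
#            content images,rootdir
# 3. move the VM config into the target node's directory
mv /etc/pve/nodes/P2/qemu-server/199.conf /etc/pve/nodes/P1/qemu-server/199.conf
# 4. point the disk reference at the new storage ID (the old ID zfsP2 is an assumption)
sed -i 's/zfsP2:vm-199-disk-0/p1raid0:vm-199-disk-0/' /etc/pve/nodes/P1/qemu-server/199.conf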
 
Hi, first of all thank you for your patient reply.

Sorry, my previous statement was inaccurate. corosync.conf had been modified earlier, which caused the cluster to fail. I have been testing ZFS recently, so I tried using zfs send to transfer the virtual machine to another node...
 
