[SOLVED] zfs went missing

tylerhoot

New Member
Oct 21, 2018
Hi, I just got the server re-added to the cluster, and then I noticed it changed the ZFS storage into an LVM-thin one, which I thought it should not have done. I also cannot migrate any VMs back to the node either.

Here is the error that the migration gave:
Volume group "pve" not found
Cannot process volume group pve
command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,lv_name,lv_size,lv_attr,pool_lv,data_percent,metadata_percent,snap_percent,uuid,tags,metadata_size pve' failed: exit code 5
send/receive failed, cleaning up snapshot(s)..
2019-11-17 19:39:35 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export local-lvm:vm-108-disk-1 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox' root@192.168.1.8 -- pvesm import local-lvm:vm-108-disk-1 raw+size - -with-snapshots 0' failed: exit code 5
2019-11-17 19:39:35 aborting phase 1 - cleanup resources
2019-11-17 19:39:35 ERROR: found stale volume copy 'local-lvm:vm-108-disk-1' on node 'proxmox'
2019-11-17 19:39:35 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export local-lvm:vm-108-disk-1 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox' root@192.168.1.8 -- pvesm import local-lvm:vm-108-disk-1 raw+size - -with-snapshots 0' failed: exit code 5
 
Seems you are using different default storage types on your servers. In general, I would avoid that.

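The "Volume group "pve" not found" part just means this node has no LVM volume group called pve; it does not necessarily mean your ZFS pool is gone. To confirm what the node actually has, you could run something like this on it (just a sketch, assuming a default install):

# is the ZFS pool still there?
zpool status
zfs list

# is there really no "pve" volume group? (this is what the lvs error is about)
vgs

# which storages does Proxmox currently see on this node?
pvesm status
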
Anyway, you can simply re-add the ZFS storage using the GUI on that node (but restrict the storage to this node, because it is not available on the other cluster nodes).
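
If you prefer the CLI over the GUI, something along these lines should do the same thing. This is only a sketch: the storage ID local-zfs, the pool name rpool/data and the node name are assumptions, so adjust them to your setup:

# re-add the ZFS pool as a storage, limited to this one node
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --nodes <yournode>

That would end up as an entry roughly like this in /etc/pve/storage.cfg:

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes <yournode>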

Or simply reinstall the node and use LVM-thin, as you do on your other cluster nodes.