Unable to import ZFS Pool

unter_hosen

New Member
Dec 9, 2020
Hi All,

I really hope someone can help me out with this. I stupidly assumed that once I had gotten rid of ESXi and installed Proxmox, I would be able to import the ZFS pool back into FreeNAS after rebuilding FreeNAS as a VM.

I can see the disks on Proxmox:

Code:
root@pve:~# ls /dev/disk/by-id/
ata-ST4000VN008-2DR166_ZM40M5FE                                              
ata-ST4000VN008-2DR166_ZM40M5FE-part1                                        
ata-ST4000VN008-2DR166_ZM40M9ER                                             
ata-ST4000VN008-2DR166_ZM40M9ER-part1                                       
ata-WDC_WD40EFRX-68N32N0_WD-WCC7K5KU6SLK                                 
ata-WDC_WD40EFRX-68N32N0_WD-WCC7K5KU6SLK-part1                       
ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7DDV0FD                                   
ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7DDV0FD-part1

and I have tried to add them to the FreeNAS VM using both

Code:
qm set 101 -scsi1 /dev/disk/by-id/ata-ST4000VN008-2DR166_ZM40M5FE
qm set 101 -scsi2 /dev/disk/by-id/ata-ST4000VN008-2DR166_ZM40M9ER
qm set 101 -scsi3 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K5KU6SLK
qm set 101 -scsi4 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7DDV0FD

and this

Code:
qm set 101 -scsi1 /dev/sda
qm set 101 -scsi2 /dev/sdb
qm set 101 -scsi3 /dev/sdc
qm set 101 -scsi4 /dev/sdd

I have also tried virtio, ide and sata, and whilst FreeNAS sees the disks, I cannot import the pool.

This is what the VMX file looked like on ESXi:

Code:
scsi0:1.deviceType = "scsi-hardDisk"
scsi0:1.fileName = "/vmfs/volumes/5d77dcd4-5c462740-6f56-001517d69db3/FreeNAS/FreeNAS_1.vmdk"
sched.scsi0:1.shares = "normal"
sched.scsi0:1.throughputCap = "off"
scsi0:1.present = "TRUE"
scsi0:2.deviceType = "scsi-hardDisk"
scsi0:2.fileName = "/vmfs/volumes/5d77dce1-f61eccd0-4db7-001517d69db3/FreeNAS/FreeNAS_2.vmdk"
sched.scsi0:2.shares = "normal"
sched.scsi0:2.throughputCap = "off"
scsi0:2.present = "TRUE"
vmci0.id = "1390000549"
cleanShutdown = "TRUE"
sata0:0.startConnected = "FALSE"
extendedConfigFile = "FreeNAS.vmxf"
scsi0:3.deviceType = "scsi-hardDisk"
scsi0:3.fileName = "/vmfs/volumes/5e7a3d0d-08bf87f0-16bb-b42e998079b1/FreeNAS/FreeNAS_3.vmdk"
sched.scsi0:3.shares = "normal"
sched.scsi0:3.throughputCap = "off"
scsi0:3.present = "TRUE"
scsi0:4.deviceType = "scsi-hardDisk"
scsi0:4.fileName = "/vmfs/volumes/5e7a3d1c-0784c2b1-ff0b-b42e998079b1/FreeNAS/FreeNAS_4.vmdk"
sched.scsi0:4.shares = "normal"
sched.scsi0:4.throughputCap = "off"
scsi0:4.present = "TRUE"
scsi0:4.redo = ""
scsi0:2.redo = ""
scsi0:0.redo = ""
scsi0:1.redo = ""
scsi0:3.redo = ""

Hoping someone might have an idea of how I can mount the pool again without losing all the data, as it's historical CCTV data.

Thanks for looking,
 
How did you migrate the VM? I assume FreeNAS was a VM on your ESXi?
From the ESXi config it seems that it used to have 4 .vmdk files as disk images, while the commands you post from PVE indicate that you are trying to attach 4 physical disks to the VM?

Also (not 100% sure), but disks used in a ZFS pool usually have 2 partitions created (AFAIR also on FreeBSD/FreeNAS), while the ones you pass through have only one...
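
If you want to double-check whether those partitions carry ZFS labels at all, something along these lines (just a sketch, untested here - zdb/zpool come with zfsutils-linux, which PVE ships) run directly on the host should tell you:

Code:
# dump the ZFS vdev label of one of the partitions, if there is one
zdb -l /dev/sda1
# list any pools that could be imported from those disks
zpool import -d /dev/disk/by-id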

What's the output of:
Code:
qm config 101
lsblk
blkid
when run on the PVE node?
 
Hey Stoiko,

Thanks so much for the reply.

I didn't actually "migrate" the VM as such. Basically I just installed Proxmox in place of ESXi on the same hardware and was hoping I could just pass the disks through to a new FreeNAS instance in Proxmox and all would be good, but alas, I was wrong.

You are correct, FreeNAS was a VM on ESXi.

Just for clarity, the Western Digital disks were the original disks and then the two Seagate disks were added and the pool extended, if that makes sense.

I tried to add the disks to the VM using the various methods described above, and whilst FreeNAS can see the disks, the existing pool isn't available for import, hence the 4 disks are not in the config at the moment.

Code:
root@pve:~# qm config 101
boot: order=scsi0;ide2
cores: 2
hostpci0: 04:10.2
ide2: none,media=cdrom
machine: q35
memory: 8192
name: FreeNAS
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-101-disk-0,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=eb92fd1e-f16d-4e09-b0ef-e8e346266ef5
sockets: 1
vmgenid: 14de45e4-6dd6-4e5c-92b2-95dd7120ac30
root@pve:~#

Code:
root@pve:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   3.7T  0 disk
└─sda1                         8:1    0   3.7T  0 part
sdb                            8:16   0   3.7T  0 disk
└─sdb1                         8:17   0   3.7T  0 part
sdc                            8:32   0   3.7T  0 disk
└─sdc1                         8:33   0   3.7T  0 part
sdd                            8:48   0   3.7T  0 disk
└─sdd1                         8:49   0   3.7T  0 part
nvme0n1                      259:0    0 953.9G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0   512M  0 part
└─nvme0n1p3                  259:3    0 953.4G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   8.3G  0 lvm 
  │ └─pve-data-tpool         253:4    0 816.7G  0 lvm 
  │   ├─pve-data             253:5    0 816.7G  0 lvm 
  │   ├─pve-vm--100--disk--0 253:6    0   256G  0 lvm 
  │   └─pve-vm--101--disk--0 253:7    0    50G  0 lvm 
  └─pve-data_tdata           253:3    0 816.7G  0 lvm 
    └─pve-data-tpool         253:4    0 816.7G  0 lvm 
      ├─pve-data             253:5    0 816.7G  0 lvm 
      ├─pve-vm--100--disk--0 253:6    0   256G  0 lvm 
      └─pve-vm--101--disk--0 253:7    0    50G  0 lvm

Code:
root@pve:~# blkid
/dev/nvme0n1p2: UUID="5BCD-4CCC" TYPE="vfat" PARTUUID="2d4302d9-4883-4085-85a4-cc4a9994118d"
/dev/nvme0n1p3: UUID="ETk3Lv-YjfS-3cUf-IuZ0-wjuY-8CZR-iHPiHH" TYPE="LVM2_member" PARTUUID="19ceb470-ab16-40bc-98c2-f0774aded177"
/dev/sda1: UUID_SUB="5d77dce0-dba57aeb-bec0-001517d69db3" TYPE="VMFS_volume_member" PARTUUID="769fb87b-66b4-4528-90dc-9612333b35ae"
/dev/sdd1: UUID_SUB="5e7a3d1b-f001d580-22ae-b42e998079b1" TYPE="VMFS_volume_member" PARTUUID="dd7bc2fb-590d-497d-b19c-0ba5e3adab44"
/dev/sdb1: UUID_SUB="5d77dcd3-41a36f44-7a68-001517d69db3" TYPE="VMFS_volume_member" PARTUUID="dc6a1d1a-82dd-4e2c-b6bf-2fb74b850766"
/dev/sdc1: UUID_SUB="5e7a3d0d-f2cbbb45-62b7-b42e998079b1" TYPE="VMFS_volume_member" PARTUUID="29b34912-b5ac-4064-943b-5e9b108eae15"
/dev/mapper/pve-swap: UUID="7fd9a94c-b963-477a-8704-8a119a1f82c3" TYPE="swap"
/dev/mapper/pve-root: UUID="a071d7d6-37ae-4e17-b9d2-9aa512de49d1" TYPE="ext4"
/dev/nvme0n1: PTUUID="8f2edd33-7401-42ca-9568-c5e1092508ac" PTTYPE="gpt"
/dev/nvme0n1p1: PARTUUID="17e9ea4d-6313-4a49-a575-1c5dba5e6297"
/dev/mapper/pve-vm--100--disk--0: PTUUID="c3f485d7-7c1f-418c-ba6d-703beb62968a" PTTYPE="gpt"
/dev/mapper/pve-vm--101--disk--0: PTUUID="baa7252f-39a9-11eb-aa8b-dd6370888ea5" PTTYPE="gpt"
root@pve:~#

Any help greatly appreciated.

Thanks

Unter
 
Hm - as guessed - the 4 virtual disks inside your FreeNAS were actually .vmdk files stored on the 4 hardware disks (on VMware's VMFS).
You need to get the .vmdk files out and move them to a storage which PVE understands.

Probably the cleanest way is to boot back into ESXi and use its tools to export the disks.
Alternatively, get the .vmdk files onto an NFS share, ext4 or other Linux filesystem and attach the images to the VM config (then you could move the disks to a PVE storage).
Last but not least, I found the following page with a quick search:
http://woshub.com/how-to-access-vmfs-datastore-from-linux-windows/
Seems like you could use that to mount the VMFS inside Linux.
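
For illustration only, the rough shape of it could look like this (completely untested sketch - the package depends on the VMFS version, and the mountpoint/datastore paths are just assumptions based on your vmx file):

Code:
# userspace VMFS driver: vmfs-tools for VMFS5, vmfs6-tools for VMFS6
apt install vmfs6-tools
mkdir -p /mnt/vmfs1
# mount the first ESXi datastore (read-only) via FUSE
vmfs6-fuse /dev/sda1 /mnt/vmfs1
ls /mnt/vmfs1/FreeNAS/
# import the image into a PVE storage (pick one with enough free space) as a new disk of VM 101
qm importdisk 101 /mnt/vmfs1/FreeNAS/FreeNAS_1.vmdk local-lvm
# repeat for the other three datastores/disks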

However, I have never tried the above tools, so I cannot speak to their quality (in other words: make sure you have a good and working backup before proceeding).

I hope this helps!
 
Stoiko,

Thanks so much for your help. Will give those options a try and see what happens.

Backing up the flat vmdk was not an option because I don't have 8TB of space anywhere else to back it up.

Thanks again and have a good Christmas.

Regards,

Unter
 
