How to move a TrueNAS ZFS zvol-based VM to PVE through iSCSI?

coolnodje

Nov 25, 2024
I'm trying to move a VM that boots via UEFI from TrueNAS to PVE.
The storage is a ZFS zvol, and I can easily share it through iSCSI.

I've set up an OVMF-type VM on PVE.

But so far I haven't found a way to get it to start.

The storage is the zvol, mounted via iSCSI on PVE.

I've tried different combinations for the EFI volume:
- point the EFI disk at the same volume as the VM storage (the EFI partition lives on the main disk, mounted at /boot/efi).
But I get a "guest has not initialized the display" message on boot, with some CPU & RAM usage, and I can't find a way to tell what's going on.

- create a new EFI volume on local storage (though I'm not sure how this could work, since the required boot info won't be there).
This allows booting with a display, but it reports that the main volume can't be found and moves on to the next boot method.

First off, is this theoretically possible?
I don't see why it shouldn't be: the zvol contains all the needed info.
I'm hoping for confirmation from a knowledgeable person, and hopefully a direction!

Cheers
 
I got it working.
In essence, there's nothing special to do.

A VM zvol in TrueNAS can be snapshotted, the snapshot cloned to a new zvol, and the clone then shared through iSCSI.
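For reference, the ZFS side can be done from the TrueNAS shell; a minimal sketch, assuming the source zvol is tank/ubuntudockerhost-0vhbns and a snapshot name of your choosing (the iSCSI extent itself is created in the TrueNAS UI under Shares > Block Shares (iSCSI)):

```
# Assumed names: source zvol tank/ubuntudockerhost-0vhbns, snapshot "migrate".
# Snapshot the VM's zvol, then clone the snapshot to a new zvol:
sudo zfs snapshot tank/ubuntudockerhost-0vhbns@migrate
sudo zfs clone tank/ubuntudockerhost-0vhbns@migrate tank/ubuntudockerhost-0vhbns-clone
# The clone is then exposed as an iSCSI extent/LUN via the TrueNAS UI.
```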

You then add this volume in PVE as iSCSI storage, checking "Use LUN directly" since the volume is dedicated to the VM.
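As a sketch, the same thing can be done from the PVE CLI with pvesm; the storage ID, portal IP and target IQN below are made-up examples, and as far as I can tell --content images corresponds to the "Use LUN directly" checkbox:

```
# Example values only; replace the storage ID, portal and IQN with yours.
pvesm add iscsi truenas-vm \
    --portal 192.168.1.10 \
    --target iqn.2005-10.org.freenas.ctl:ubuntudockerhost \
    --content images   # "Use LUN directly"
```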

If the VM uses UEFI boot mode on TrueNAS, then you need to keep the same mode on PVE and select OVMF as the BIOS type.

But you must uncheck the EFI disk option, to make sure the EFI data is read from the main volume:

```
Add EFI Disk: uncheck
```

And boom, it does boot with regular options!
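For reference, the CLI equivalent of that setting (VMID 100 is just an example):

```
# Example VMID; run on the PVE host.
qm set 100 --bios ovmf
# The VM config (/etc/pve/qemu-server/100.conf) should then contain
# "bios: ovmf" and no "efidisk0:" line.
```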

A very good start, but only the beginning of the troubles: the network interface will have a different name, and you can run into issues with additional volumes and shares that were mounted on the original VM.
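For the NIC rename, here's a rough sketch of the fix inside the guest, assuming an Ubuntu guest managed with netplan (the interface and file names are examples):

```
# Inside the guest, after first boot on PVE:
ip link                 # find the new interface name, e.g. ens18
# Replace the old name in the netplan config (file name varies):
sudo sed -i 's/eth0/ens18/g' /etc/netplan/00-installer-config.yaml
sudo netplan apply
```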

Hardware differences (CPU) seem fine in my case, even though I'm using the Host CPU type on both sides, but they could become an issue down the line with software that relies on specific CPU extensions.


Here's how you can do a pre-check of the volume on both sides to make sure you're using what you should:

On the SAN sharing the volume via iSCSI (TrueNAS), you can inspect the volume and its partitions with fdisk:
```
freenas% sudo fdisk -l /dev/zvol/tank/ubuntudockerhost-0vhbns-clone
Disk /dev/zvol/tank/ubuntudockerhost-0vhbns-clone: 80 GiB, 85899345920 bytes, 167772160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 33554432 bytes
Disklabel type: gpt
Disk identifier: F02887A7-E662-44BC-83A5-6905B15FF0EE

Device                                          Start       End   Sectors  Size Type
/dev/zvol/tank/ubuntudockerhost-0vhbns-clone1    2048   1050623   1048576  512M EFI System
/dev/zvol/tank/ubuntudockerhost-0vhbns-clone2 1050624   3147775   2097152    1G Linux filesystem
/dev/zvol/tank/ubuntudockerhost-0vhbns-clone3 3147776 167772126 164624351 78.5G Linux filesystem
```

Mounting this on PVE with "Use LUN directly", you get a new block device, which you can identify with lsblk and also check with fdisk:
```
root@pve:~# fdisk -l /dev/sde
Disk /dev/sde: 80 GiB, 85899345920 bytes, 167772160 sectors
Disk model: iSCSI Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 8388608 bytes
Disklabel type: gpt
Disk identifier: F02887A7-E662-44BC-83A5-6905B15FF0EE

Device       Start       End   Sectors  Size Type
/dev/sde1     2048   1050623   1048576  512M EFI System
/dev/sde2  1050624   3147775   2097152    1G Linux filesystem
/dev/sde3  3147776 167772126 164624351 78.5G Linux filesystem
```
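As a quicker check that both sides are looking at the same disk, you can compare the GPT disk identifier directly; a sketch assuming TrueNAS SCALE (blkid is a Linux tool):

```
# On TrueNAS:
sudo blkid -o value -s PTUUID /dev/zvol/tank/ubuntudockerhost-0vhbns-clone
# On PVE:
blkid -o value -s PTUUID /dev/sde
# Both should print the same GUID (f02887a7-... in the example above).
```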
 