Move disk to ZFS failed (being new to ZFS)

bkraegelin

Renowned Member
On a test server pair I installed Proxmox 5.1 and created a ZFS pool.

Configuration:
2 servers as a cluster running Proxmox 5.1
each server has one internal RAID disk (RAID 10, 14 disks)
and one external RAID disk (RAID 6, 10 disks), external disk is used as ZFS storage
(YES, only one disk, redundancy done via hardware RAID controller)

I installed some KVM and LXC VMs, both newly installed ones and ones restored
from Proxmox 4.4 backups.

I want to use live migration and storage replication and tried different things:
- live migration using the GUI fails (no "with-local-disks" option)
- live migration using bash works (lvm-thin to lvm-thin)
- move disk online to ZFS using the GUI fails (reaches 100%, but never finishes)
- move disk offline to ZFS using the GUI also fails
- move disk to ZFS using bash fails
(parameter verification fails: disk vm-2099-disk-1 does not have a value in enumeration ....)
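For reference, the bash attempts above would look roughly like this on Proxmox 5.x. The VM ID, node name, disk key and storage ID below are placeholders; they must match your own cluster (the storage ID comes from /etc/pve/storage.cfg).

```shell
# Online migration including local disks (the option the 5.1 GUI did not expose).
# "node2" is a placeholder for the target cluster node.
qm migrate 2099 node2 --online --with-local-disks

# Move a disk of VM 2099 to a ZFS-backed storage.
# "virtio0" and "local-zfs" are placeholders for the disk key and storage ID.
qm move_disk 2099 virtio0 local-zfs
```

The enumeration error in the last bullet typically means the disk key passed to `qm move_disk` does not match any disk key in the VM config (`qm config 2099` lists the valid ones).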

Haven't worked with ZFS before, maybe I have misconfigured it or I forgot some steps.

Any help?

Thanks, Birger
 
First of all, if you use a RAID controller, don't use ZFS. RAID controllers that are not in IT mode can destroy a ZFS pool. Use LVM or a simple file system.

As for why you cannot migrate - Proxmox support will be able to tell you.
 
Thanks for this information. I managed to read a little about this.

This seems to be a very small risk (bit changes that the RAID controller cannot detect). Such corruption would damage other filesystems too, and you might only lose some files if it happens. As I cannot find any posts about such damage actually happening, it seems to be somewhat theoretical...

Running ZFS on top of hardware RAID should be the same situation as formatting a single disk with ZFS.

Since this is not yet going to production, I will rethink the setup later.