Mixed file systems

Spartan67

I have an existing cluster with all nodes using ZFS (all current nodes are RAID 0). I decided to add a new node using the EXT4 filesystem. After joining that new node to my existing cluster, I cannot see the LVM storage.
 
joining a cluster means the joining node gets all the cluster-wide config from the existing cluster, including storage.cfg. you need to re-add the LVM storage (and probably should limit it to the one node that has it available).
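
as a minimal sketch, re-adding an LVM-thin storage restricted to one node could look like this on the CLI (the volume group 'pve' and thin pool 'data' are assumptions matching a default install, and 'sn-pve-32' is the new node's name as it appears in the migration log later in this thread):

pvesm add lvmthin local-lvm --vgname pve --thinpool data --content images,rootdir --nodes sn-pve-32

the --nodes option is what limits the storage to the node that actually has it.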
 
Hi Fabian...

Sorry, my explanation was not complete... I went into the cluster configuration for disks and added the LVM-Thin storage for the new node only, and also disabled ZFS from being visible on the new node. But I cannot see the disk when migrating VMs to it; I can see it when creating a new VM, though.
 
are you creating on/migrating to the right node? is the content type setting for the storage correct?
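
for reference, both the node restriction and the content types live in /etc/pve/storage.cfg - a sketch of what that could look like here (the ZFS pool, volume group, and thin pool names are assumptions):

zfspool: local-zfs
	pool rpool/data
	content images,rootdir

lvmthin: local-lvm
	vgname pve
	thinpool data
	content images,rootdir
	nodes sn-pve-32

a storage used for VM disks needs 'images' in its content list, or it won't be offered as a creation/migration target.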
 
So more experimentation shows that I can migrate a VM that is running, just not offline VMs...

With a running VM I can choose the LVM storage when migrating, but with an offline VM there is no option to choose 'target storage'. Is that by design...?
 
yeah - it's available on the CLI/API (if the storage combination supports it), but not on the GUI yet.
 
Okay Fabian... would you be able to point me to the CLI to do that...?

Otherwise it seems I've figured everything else out.

Moving to EXT4 since I have my eyes on a couple of servers with hardware RAID now that I've used Proxmox for nearly a year and LOVE it... and also ZFS seems to consume quite a bit of memory.


Thanks...
 
qm migrate VMID TARGETNODE --with-local-disks --targetstorage STORAGEMAP

where STORAGEMAP can be either a single storage, or a list of mappings in the form 'SOURCE:TARGET', or a combination of both. for example, to move disks on storage 'storage1' to storage 'storage2' on the target node, and disks on all other storages to 'storage3', you'd use 'storage1:storage2,storage3'
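
concretely, for the VM and node in this thread (IDs taken from the logs below - treat this as a sketch, not a verified command for this exact cluster):

qm migrate 206 sn-pve-32 --online --with-local-disks --targetstorage local-lvm

this would live-migrate VM 206 to node 'sn-pve-32' and move all its local disks onto the 'local-lvm' storage there.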
 
I have tried multiple iterations of the command with no success. The problem seems to be the --targetstorage option. I have read several articles online and tried different strings, all with no success.


ERROR: migration aborted (duration 00:00:01): storage migration for 'local-zfs:base-101-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
migration aborted
 
yeah, zfs only supports offline migration to zfs - you'd need to first move the disk to some other storage, or use live-migration.
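
a sketch of that first step (the disk name 'scsi0' and the intermediate storage 'shared-dir' are placeholders - use whatever your VM config and cluster actually have):

qm move_disk 206 scsi0 shared-dir

after that, the offline migration should go through, and you can move the disk again on the target node if needed.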
 
Oddly enough live migration will not work either:

ERROR: migration aborted (duration 00:00:00): storage migration for 'local-zfs:vm-206-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
TASK ERROR: migration aborted
 
please post the full log (the message indicates that disk is migrated offline, which can happen for various reasons)
 
2021-11-24 10:39:46 starting migration of VM 206 to node 'sn-pve-32' (192.168.5.32)
2021-11-24 10:39:46 found local disk 'local-zfs:vm-206-disk-0' (via storage)
2021-11-24 10:39:46 found local disk 'local-zfs:vm-206-disk-1' (via storage)
2021-11-24 10:39:46 found local disk 'local-zfs:vm-206-disk-2' (in current VM config)
2021-11-24 10:39:46 copying local disk images
2021-11-24 10:39:46 ERROR: storage migration for 'local-zfs:vm-206-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
2021-11-24 10:39:46 aborting phase 1 - cleanup resources
2021-11-24 10:39:46 ERROR: migration aborted (duration 00:00:00): storage migration for 'local-zfs:vm-206-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
TASK ERROR: migration aborted
 
so you have two disks on ZFS that are not used by the VM currently - these can only be migrated offline / on the storage level. if you don't need them, remove them before the migration.
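
one way to check and clean those up (a sketch using the VMID and storage from the log above - double-check before freeing anything):

pvesm list local-zfs --vmid 206
pvesm free local-zfs:vm-206-disk-0

the first command lists the volumes that storage holds for the VMID, the second destroys a volume you have confirmed is no longer needed.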
 
So the weird thing is there was only one disk listed in the VM hardware. I had this issue with two other VMs on that node: you would try migrating and it would claim there were other disks on the VM. Another strange thing is I know when I initially built the VM I used disk 0, so I'm baffled as to how the VM moved to disk 2.

Proxmox Backup Server saved the day though. I shut down the VM, did a restore on the new node, and was good to go. It was good practice since this was the first time I actually used the restore function, and it worked beautifully...!
 
