[SOLVED] Local-LVM status unknown on ZFS

jona

New Member
Mar 2, 2024
Hi community.
I've just installed the latest Proxmox VE on a small server with 4 SSDs connected to the backplane. I chose ZFS RAID10 in the installation to have everything on these 4 SSDs.
I updated, rebooted etc., and when I wanted to start setting up VMs I realised that local-lvm has the status "unknown". I'm not sure whether it was like that the whole time, but now I can't proceed.

Code:
/etc/pve/storage.cfg

dir: local
    path /var/lib/vz
    content iso,vztmpl
    shared 0

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

I want both local and local-lvm to use the same space on this single ZFS pool. Am I missing anything?
 
I chose ZFS RAID10 in the installation to have everything on these 4 SSDs.
If I understand your situation correctly - you only have those 4 drives, & your OS in fact runs on ZFS, i.e. ZFS is the root file system.
If so you will not / should not have a local-lvm; instead you should have a local-zfs.

Not only should your storage.cfg not contain a local-lvm, it should also contain that local-zfs, something like this:
Code:
zfspool: local-zfs
    pool rpool/data
    sparse
    content images,rootdir

Something is just not right with your installation. How did you install? Were those disks wiped/new? Maybe you installed on top of another OS?

Another thing I notice is that you have no backup storage set up; that is also not the standard installation, as local usually includes backup content.
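If you want local to hold backups too, you could enable that content type, e.g. (just a sketch, adjust to your needs):
Code:
pvesm set local --content iso,vztmpl,backup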

Edit: Please show the output of the following:
Code:
zpool status

zfs list
 
Thanks for replying @gfngfn256
You're completely right. So I did all the steps again, this time with my eyes wide open, and I can see that initially everything is fine and there is a local-zfs.
But then I joined a brand-new cluster with another PVE node that has a local-lvm, and suddenly the local-zfs changed to local-lvm and so did the status (-> unknown).

The idea was to save all VMs from server1 on server2, set up server1 without the RAID controller etc., install PVE on server1, join a new cluster created on server2, and migrate all VMs and CTs back to server1.

It doesn't work like this. Any idea? Is this a bug?
 
When joining a cluster, the node takes over the configs from the cluster. You will have to add a new ZFS storage for that node under Datacenter -> Storage after it has joined the cluster and point it to rpool/data.
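If you prefer the CLI, something along these lines should achieve the same (the storage name and options are just an example, adjust them to your setup):
Code:
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --sparse 1 --nodes server1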
 
Thanks a lot @aaron
That explains it... and it worked somehow, but the migration fails with an error at the end:

Code:
storage 'local-lvm' is not available on node 'server1' (500)

What I did was restrict local-lvm to server2 only and create the ZFS storage called local-zfs for server1 only. No unknown storages any more, but the migration doesn't work out. Any hint? :)
 
This happens when we only get half the story in the first post.

The idea was to save all VMs from server1 on server2, set up server1 without the RAID controller etc., install PVE on server1, join a new cluster created on server2, and migrate all VMs and CTs back to server1.
If all you want in the end is one node with its VMs, why not back up all VMs to a backup storage device (can be anything: PBS, NAS, external drive etc.), create the new server as required, & then attach that backup storage & restore the VMs. (You should have that backup storage set up anyway.)
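As a rough sketch of that route (the VMID, paths and the backup storage name are placeholders):
Code:
# on the old node: back up the VM to the attached backup storage
vzdump 100 --storage <backup-storage> --mode snapshot
# on the new node: restore it onto the ZFS storage
qmrestore /path/to/vzdump-qemu-100.vma.zst 100 --storage local-zfs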


but the migration doesn't work out
Did you choose the target storage in the migration?
 
For running VMs I have that option, but I don't get it for offline VMs or for CTs. That's where I got the error.
 
For running VMs I have that option, but I don't get it for offline VMs or for CTs. That's where I got the error.
AFAIK if you do it via the GUI it has to be from local storage on the source, & then the GUI will let you choose the target storage.

Alternatively (without the above restriction) you could do it via the CLI with the qm migrate command & the --targetstorage <string> option. See here.
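For example (the VMID is a placeholder; depending on the VM's state and disks you may also need --online and/or --with-local-disks):
Code:
qm migrate 100 server1 --targetstorage local-zfs --with-local-disks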
 
For LXCs you would use the CLI with the pct migrate command & the --target-storage <string> option. See here.

However, LXC migration/moving/restoring can have its own challenges & complications.
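If the migration itself gives you trouble, a backup & restore of the container is a possible workaround (the CT ID, archive path and backup storage name are placeholders):
Code:
# on the old node: back up the container
vzdump 105 --storage <backup-storage> --mode snapshot
# on the new node: restore it onto the ZFS storage
pct restore 105 /path/to/vzdump-lxc-105.tar.zst --storage local-zfs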
 
Thanks again. I tried to migrate the VMs and CTs, but even via the CLI I get mismatch errors:

CT:
Code:
# pct migrate 105 server1 --target-storage local-zfs --online
2024-12-18 22:21:57 starting migration of CT 105 to node 'server1' (192.168.178.65)
2024-12-18 22:21:57 found local volume 'local-lvm:vm-105-disk-0' (in current VM config)
2024-12-18 22:21:57 ERROR: storage migration for 'local-lvm:vm-105-disk-0' to storage 'local-zfs' failed - cannot migrate from storage type 'lvmthin' to 'zfspool'
2024-12-18 22:21:57 aborting phase 1 - cleanup resources
2024-12-18 22:21:57 ERROR: found stale volume copy 'local-lvm:vm-105-disk-0' on node 'server1'
2024-12-18 22:21:57 start final cleanup
2024-12-18 22:21:57 ERROR: migration aborted (duration 00:00:01): storage migration for 'local-lvm:vm-105-disk-0' to storage 'local-zfs' failed - cannot migrate from storage type 'lvmthin' to 'zfspool'
migration aborted

VM:
Code:
# qm migrate 108 server1 --targetstorage local-zfs
2024-12-18 22:16:15 starting migration of VM 108 to node 'server1' (192.168.178.65)
2024-12-18 22:16:15 found local disk 'local-lvm:vm-108-disk-0' (attached)
2024-12-18 22:16:15 copying local disk images
2024-12-18 22:16:15 ERROR: storage migration for 'local-lvm:vm-108-disk-0' to storage 'local-zfs' failed - cannot migrate from storage type 'lvmthin' to 'zfspool'
2024-12-18 22:16:15 aborting phase 1 - cleanup resources
2024-12-18 22:16:15 ERROR: migration aborted (duration 00:00:01): storage migration for 'local-lvm:vm-108-disk-0' to storage 'local-zfs' failed - cannot migrate from storage type 'lvmthin' to 'zfspool'
migration aborted


# qm migrate 108 server1 --targetstorage local-zfs --with-local-disks
2024-12-18 22:19:24 starting migration of VM 108 to node 'server1' (192.168.178.65)
2024-12-18 22:19:24 found local disk 'local-lvm:vm-108-disk-0' (attached)
2024-12-18 22:19:24 copying local disk images
2024-12-18 22:19:24 ERROR: storage migration for 'local-lvm:vm-108-disk-0' to storage 'local-zfs' failed - cannot migrate from storage type 'lvmthin' to 'zfspool'
2024-12-18 22:19:24 aborting phase 1 - cleanup resources
2024-12-18 22:19:24 ERROR: migration aborted (duration 00:00:00): storage migration for 'local-lvm:vm-108-disk-0' to storage 'local-zfs' failed - cannot migrate from storage type 'lvmthin' to 'zfspool'
migration aborted
 
Could you show the output of the /etc/pve/storage.cfg that includes the local-zfs entry of the target node?

I don't have much time now - but I may follow up on this in the future.
 
sure

Code:
# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,vztmpl
    shared 0

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir
    nodes poseidon

nfs: ...

nfs: ...

pbs: ...

zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    nodes server1
    sparse 1

The ... entries just hide information that I don't think is relevant here.
 
Could you show me a screenshot of the GUI showing the target node (I believe server1) with its storages listed underneath, & with that local-zfs selected?

Secondly, for testing, could you try & create a new VM on that local-zfs storage?
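For example, a minimal test VM with a small disk on that storage (the VMID, name and sizes are arbitrary):
Code:
qm create 999 --name zfs-test --memory 512 --scsi0 local-zfs:4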
 
Thank you very much. In the end I migrated all the VMs in online mode, and the few CTs I managed to restore from the latest backup.
Now everything is working as expected :)
 
