Convert local-lvm to zfs on a single disk cluster-installation

sender

Member
Apr 9, 2021
So I have added my 2nd node to a Proxmox cluster :-). I wanted to use "replicate" and discovered I needed ZFS for that... bump.

So I have been crawling this forum and Google and found no clear answer. Without reinstalling the entire host, is it possible to convert local-lvm to ZFS?

What is the best way to get to a dual-single-node-cluster (2x intel NUC8) with ZFS on the NVME drives?
 
Without reinstalling the entire host, is it possible to convert local-lvm to zfs?
It is not possible to convert the local-lvm (the disk which has the OS on it) to ZFS on the fly. Either you add more disks and use those for a ZFS pool, or you reinstall the nodes and select ZFS during the installation. If you do reinstall, setting up a RAID 1 (mirror) with ZFS would be best if you have the disks available for it.
If you already have a cluster and guests running, you can move the VMs to the other node, then remove that node from the cluster (https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_remove_a_cluster_node) before you reinstall it and add it to the cluster again.
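In case it helps, the reinstall-one-node approach roughly looks like this on the command line. The node names, guest ID, and IP address are placeholders for illustration, so check the linked docs and adjust before running anything:

```shell
# 1. Migrate each guest off the node you want to reinstall
#    (VM ID 100 and target "node1" are example values):
qm migrate 100 node1

# 2. Power the old node off for good, then remove it by running this
#    on a REMAINING node (never on the node being removed):
pvecm delnode node2

# 3. Reinstall node2 with ZFS selected in the installer, then re-join
#    the cluster from the fresh install, pointing at an existing member:
pvecm add 192.168.1.10      # IP of node1, example address
```

Re-adding a node under the same name needs some cleanup on the remaining nodes, so follow the "Remove a Cluster Node" section of the docs closely.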

If you only have a 2-node cluster, consider getting a QDevice up and running to have a third vote, so that the cluster keeps functioning normally should one of the 2 nodes be down (it could run on a Raspberry Pi): https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support
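For reference, the QDevice setup from the linked docs boils down to a few commands; the IP address below is an example for the Pi:

```shell
# On the Raspberry Pi (the machine holding the external vote):
apt install corosync-qnetd

# On ALL Proxmox cluster nodes:
apt install corosync-qdevice

# Then, on ONE cluster node, register the QDevice:
pvecm qdevice setup 192.168.1.20
```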
 
It is not possible to convert the local-lvm (the disk which has the OS on it) to ZFS on the fly. Either you add more disks and use those for a ZFS pool, or you reinstall the nodes and select ZFS during the installation. If you do reinstall, setting up a RAID 1 (mirror) with ZFS would be best if you have the disks available for it.
If you already have a cluster and guests running, you can move the VMs to the other node, then remove that node from the cluster (https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_remove_a_cluster_node) before you reinstall it and add it to the cluster again.

If you only have a 2-node cluster, consider getting a QDevice up and running to have a third vote, so that the cluster keeps functioning normally should one of the 2 nodes be down (it could run on a Raspberry Pi): https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support
I'm sorry, I know I'm reviving an old thread. But this is good information and I want to make sure I proceed carefully and get the correct info in context. Please pardon any improperly used terms.

I have 2 identical Chromebox nodes with only a single internal storage device in each. I'm using these to host lightweight services that I can replicate between the two nodes for failover/HA. I mistakenly set them both up with default LVM storage before I knew I needed ZFS.

You mentioned the idea of moving guests between nodes in the cluster, reinstalling the OS and setting it up for ZFS, then moving the guests to the ZFS storage on the updated node. I'm assuming there are no issues having nodes in a cluster with different storage types? There are no issues moving guests from LVM to ZFS in the cluster? Will this also work with containers?

I see your mention of a QDevice for assisting a 2 node cluster during failures. If I planned to introduce at least one more node into my environment, is this still required/recommended?

I appreciate your help!
 
I'm assuming there are no issues having nodes in a cluster with different storage types? There's no issues moving guests from LVM to ZFS in the cluster? Will this also work with containers?
You are correct. The migration makes sure that the disks are stored in the correct format for the target storage. When you migrate, you will need to specify the different target storage.
Make sure that you name the ZFS pool the same on both nodes, so that you can use the replication feature, should you need / want it :)
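As a sketch of what that looks like in practice (guest ID 100, node names, and the storage ID "local-zfs" are all example values, not from this thread):

```shell
# Check the ZFS pool name on each node; they must match for replication:
zpool list

# Migrate a guest to the other node, redirecting its disks onto the
# ZFS storage there:
qm migrate 100 node2 --targetstorage local-zfs

# Once the guest lives on ZFS on both sides, create a replication job
# (job "100-0" = guest 100, job number 0, running every 15 minutes):
pvesr create-local-job 100-0 node1 --schedule "*/15"
```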

I see your mention of a QDevice for assisting a 2 node cluster during failures. If I planned to introduce at least one more node into my environment, is this still required/recommended?
If you add a third full node, then there is no need for the QDevice. The main thing is that you need a majority (quorum) for the cluster nodes to work. Therefore, at least 3 votes (2 nodes + 1 Qdevice or 3 nodes) so that one can be down and the remaining nodes still have more than 50% of the votes.
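You can verify the vote math on any node at any time:

```shell
pvecm status
# In the "Votequorum information" section, a healthy 2-nodes-plus-QDevice
# (or 3-node) cluster shows "Expected votes: 3" and "Quorate: Yes"; with
# one node down you still have 2 of 3 votes, so the cluster keeps working.
```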
 
Hi, apologies as well for touching on an old subject, but I have the same or a similar problem on my side. Seems we all start with LVM for some reason...

My situation is a bit different: I have two nodes set up in cluster mode, but both have 8 drives each, which are set up as:
node1 (screenshot from the Disks section in Proxmox attached):
2 drives = RAID1 (system - sda)
6 drives = RAID5 (local storage - sdb)


My question is: if I migrated all VMs from node 1 to node 2 (the disks for the VMs are on the 6-drive setup), could I just remove the LVM storage and reformat it as ZFS? Then migrate again, this time node 2 to 1, and do the same...

Thank you
 

Attachments

  • Screenshot 2025-01-04 at 14.48.44.png
Your PVE OS is installed on LVM too, so that would mean a new installation.
Do you have 2 single PVEs or a 2-node cluster, which is not recommended as it should be a minimum of 3 nodes?!
Btw, freeing up 1 node, reinstalling it, and then migrating everything from node 2 to node 1 (and vice versa for the other node) is possible.
 
Your PVE OS is installed on LVM too, so that would mean a new installation.
Do you have 2 single PVEs or a 2-node cluster, which is not recommended as it should be a minimum of 3 nodes?!
Btw, freeing up 1 node, reinstalling it, and then migrating everything from node 2 to node 1 (and vice versa for the other node) is possible.
Thank you for your response, so I need to get both sda and sdb set up with ZFS, shame.

2-node cluster, but the third node is in progress, will be connected soon.
 
Then I would begin with the third node first. And even if it isn't normally sized, I would prefer a super small but real PVE node instead of just a QDevice, because then you can also test anything on it, which isn't possible with a QDevice.
 
Good morning, my problem is the following: I have 2 nodes in a cluster. Each node has 2 separate disks of 10TB each. On the first node there is data01 as LVM and the second disk as ZFS. On the second node there is data01 as LVM and the second disk as LVM. I want to change that second disk to ZFS so that it can connect, since it does not let me because the storage types are different. Any idea on how to change from LVM to ZFS?
 

Attachments

  • Sin título.jpg
Any idea on how to change from LVM-ZFS
There is no option to do it on the fly. You will have to free it up, destroy the LVM (and the storage config in Datacenter->Storage), then create the ZFS pool on the empty disk.
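A rough sketch of those steps, assuming the volume group is called "data01" and the disk is /dev/sdb (both placeholders; verify with `vgs`, `lvs`, and `pvs` first, and make sure everything on the LVM has been moved or backed up):

```shell
# Remove the storage entry (same as Datacenter->Storage in the GUI):
pvesm remove data01

# Destroy the LVM layout and wipe the disk (DESTROYS ALL DATA on sdb):
vgremove data01
pvremove /dev/sdb
wipefs -a /dev/sdb

# Create the ZFS pool, naming it like the ZFS storage on the other node
# so replication can be used later, and register it with Proxmox:
zpool create -o ashift=12 data01 /dev/sdb
pvesm add zfspool data01 --pool data01 --content images,rootdir
```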