Cluster with both ZFS and LVM

Patrik Stenfeldt

Jan 6, 2019
Hi all.

I just found Proxmox a few days ago, and since I have wanted to move away from ESXi for a while, I installed a machine with it.

Since then I have retired one of my ESXi hosts to run Proxmox on it and have had a ton of issues (but I'm saving those for another thread).

It is now up and running and I'm trying to start a cluster. The test machine only has a single HDD, and the next machine is fitted with a RAID array, which I decided to configure as ZFS.

If I simply connect it to the cluster, the local-zfs storage disappears.

How do I make a mixed-filesystem cluster?
 
Hi,

Proxmox storage configuration is cluster-wide.
When you add a node to a cluster, it receives the cluster's storage configuration and loses its local one.
In your case, you have to add the local-zfs storage again and restrict it to the node where it exists.

See https://pve.proxmox.com/wiki/Storage
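
For example, this can be done from a shell with pvesm (a sketch; the pool path rpool/data and the node name pve-zfs are assumptions, so substitute your own values):

Code:
# Re-add the ZFS storage and restrict it to the node that actually has the pool.
# "local-zfs", "rpool/data" and "pve-zfs" are example values.
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --nodes pve-zfs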
 
Hi!

I'm in the same situation as Patrik and I can't find a solution. I'm trying to add a node with local ZFS storage to a cluster with LVM-thin storage. When I add the ZFS node, the local-zfs storage on the node just disappears and is replaced by the cluster's local-lvm storage (up to this point, normal behavior). If I try to add a ZFS storage for that node as Wolfgang said, there is no ZFS pool selectable from the menu in the datacenter storage view.

I'm new to Proxmox; what am I missing?

Thanks!

[Attachments: ZFS Pool.PNG, Nodes.PNG]
 
Here is the solution if someone is in the same situation:
Once the node with ZFS storage is a member of the cluster, open a shell on any node, edit the /etc/pve/storage.cfg configuration file, and add:

zfspool: name_of_the_pool
    pool rpool/data
    content rootdir,images
    mountpoint /rpool/data
    nodes nodename_which_will_mount_ZFS_Storage
    sparse 0


For example, to mount a pool named pvo-pool-ssd on a server named pvo:

zfspool: pvo-pool-ssd
    pool rpool/data
    content rootdir,images
    mountpoint /rpool/data
    nodes pvo
    sparse 0
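
Before editing, you can double-check the pool path, and afterwards confirm the storage is active (example commands; the output depends on your setup):

Code:
# List ZFS datasets on the node to find the right value for "pool":
zfs list -o name
# After saving /etc/pve/storage.cfg, verify the new storage shows up:
pvesm status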

Cheers!
 
Thank you; this helped me hugely!!
 
Just worked for me. Thanks!
Now I can migrate everything over, redo the original node as ZFS RAID1 as well, and have it all kosher.

Cheers.
 
Hello,

This happened to me as well. I had a 3-node cluster and wanted to replace a node, so I went and deleted one. I followed the instructions in the documentation:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_remove_a_cluster_node
They did not say anything about removing storage.
When I added a new node with a ZFS root filesystem, the local-lvm storage stopped functioning. I could migrate a VM onto the node but could not migrate it away. It produced the following error:
Code:
ERROR: Problem found while scanning volumes - no such logical volume pve/data
So, as mentioned here, I went to edit the file /etc/pve/storage.cfg.
(Interestingly, I found entries for storages from my old node; I deleted them.)
Then I modified

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

by adding a line with the node names of the rest of the nodes (the problem was with the node proxmox2):

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir
    nodes proxmox1,proxmox3

After that, the non-functioning storage disappeared from the node and I could migrate VMs again.
The rest of the nodes do not have ZFS and use LVM instead, as per the default installation.

PS: This can probably also be done in the datacenter view, in the storage area, where for each storage you should be able to choose on which nodes it should be present.
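
The same node restriction can also be applied from a shell with pvesm (a sketch reusing the node names from the example above):

Code:
# Restrict local-lvm to the nodes that actually have the LVM thin pool:
pvesm set local-lvm --nodes proxmox1,proxmox3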
 