I have been having issues on my Proxmox cluster since I added a new node with ZFS (RAIDZ2) storage.
I had a 3-node cluster (version 7.3) where each node (named node1, node2 and node3) had a single disk using an XFS filesystem. I then added a new v7.4 node (named node4) with 8 disks in ZFS RAIDZ2.
Note: after this, I upgraded each 7.3 node to version 7.4-3.
As an example of the issues: I cannot migrate an existing VM from the old XFS nodes to the new ZFS node because it cannot find the "pve" VG. The error is:
Volume group "pve" not found
I guess the migration process uses LVM features to copy data from one node to another, and since the new node has no LVM, it fails.
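If my guess is right, it should be easy to confirm on node4 with the standard LVM tools. A quick sketch (the check and its messages are my own, it just probes for the VG the errors mention):

```shell
# Probe for the "pve" volume group that the old XFS nodes have
# and that the migration/create errors complain about.
if vgs pve >/dev/null 2>&1; then
  echo "pve: VG found"
else
  echo "pve: VG not found"
fi
```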
I managed to migrate an existing VM that has no disk from an old XFS node to the new ZFS node, but from this new location I cannot add a disk to it; I get the following error:
failed to update VM 114: no such logical volume pve/data (500)
The /etc/pve/qemu-server/114.conf file does not seem to reference any leftover storage configuration:
Code:
boot: order=ide2;net0
cores: 2
ide2: none,media=cdrom
machine: pc-i440fx-7.1
memory: 4096
meta: creation-qemu=7.1.0,ctime=1682077154
name: foo.example.com
net0: e1000=1A:D1:B7:83:19:0D,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsihw: virtio-scsi-single
smbios1: uuid=848a35f6-6ca9-4bb0-8ca8-030ed9ad1ab7
sockets: 1
vmgenid: 9c11d306-2ee3-4683-bb6b-9c8f26f2009b
I also cannot create a VM with a disk on the new ZFS node (on the only available storage, which is "local-lvm"), as I get a similar error:
TASK ERROR: unable to create VM 118 - no such logical volume pve/data
Creating a VM without a disk works fine, but I cannot delete it:
TASK ERROR: no such logical volume pve/data
(I had tested VM-with-disk creation on the new ZFS node before adding it to the cluster: it worked just fine.)
Is my cluster setup doomed because LVM and ZFS cannot be mixed like this? Or is there a solution/configuration I failed to find?
More details:
The "Storage" view of the cluster only lists these two storages (no ZFS):
- local
- local-lvm
And the "Disks" section of each node is as follows:
- For the old XFS nodes (1 disk):
- LVM: pve (/dev/sda3)
- LVM-Thin: data (in "pve" VG)
- Directory: (empty)
- ZFS: (empty)
- For the new ZFS node (8 disks):
- LVM: No VGs found
- LVM-Thin: No thin-Pool found
- Directory: (empty)
- ZFS: rpool
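Given the listing above, I assume the relevant /etc/pve/storage.cfg entries look roughly like the sketch below (I have not pasted the real file; paths, the "local-zfs" name and the zfspool entry are my guesses). Would restricting "local-lvm" to the LVM nodes via the nodes option, and adding a zfspool storage for node4, be the right direction?

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes node1,node2,node3

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        nodes node4
```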