[SOLVED] Conflict node local-zfs

Ivan Gersi

I have a pve1 node with a 4TB disk and a fresh installation with local + local-zfs. I've made another node with a 1TB disk with local and local-zfs storage (the classic 100GB for local, the rest for local-zfs).
When I made a cluster and joined node2 to node1, node2 has a problem with its own local-zfs... I can see local-zfs (pve2) with a question mark.
What is the best fix for this issue? E.g. I can resize local from 100GB to 1TB on pve2, or is it better to add a new ZFS storage in the cluster?
I can still see the LVM-Thin data pool on pve2.

root@pve1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 794M 8.3M 786M 2% /run
rpool/ROOT/pve-1 3.5T 38G 3.5T 2% /
tmpfs 3.9G 66M 3.9G 2% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
rpool 3.5T 128K 3.5T 1% /rpool
rpool/ROOT 3.5T 128K 3.5T 1% /rpool/ROOT
rpool/data 3.5T 128K 3.5T 1% /rpool/data
//172.16.0.15/Public 3.6T 764G 2.9T 21% /mnt/pve/NAS
/dev/fuse 128M 24K 128M 1% /etc/pve
tmpfs 794M 0 794M 0% /run/user/0
pve2 has an LVM structure:
root@pve2:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- <930.51g <16.00g
root@pve2:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- <930.51g <16.00g
root@pve2:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-a-tz-- 794.66g 0.00 0.24
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 7.63g
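
For reference, the storage definitions that get shared across the cluster live in /etc/pve/storage.cfg, and the per-node state can be checked from the shell. Just a sketch using the standard Proxmox tools, output not pasted here:

root@pve2:~# cat /etc/pve/storage.cfg   # shared cluster-wide via the /etc/pve filesystem
root@pve2:~# pvesm status               # shows which storages are active/inactive on this node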

What's the best way, guys?


Edit: I've migrated local-zfs to local and mounted it to /var/lib/vz... Everything was fine, but after a reboot I can't log in to pve2. It has an IP from the DHCP server and ICMP is working, but sshd doesn't answer. It's a remote machine, so I'll have to wait for info from the monitor.
 
There is a strange issue. If I try to mount the data volume via fstab, Proxmox won't boot up.
I've tried setting it up in fstab as /dev/pve/data /var/lib/vz ext4 defaults 0 2, then without ext4, and finally just mount /dev/pve/data /var/lib/vz.
Proxmox crashes during boot because it isn't able to mount this data volume.
But... when I remove this volume from fstab, the OS boots properly, and if I mount the volume via the shell there is no problem doing it:

root@pve2:/etc# mount /dev/pve/data /var/lib/vz
root@pve2:/etc# df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.8G 0 3.8G 0% /dev
tmpfs 782M 1.2M 781M 1% /run
/dev/mapper/pve-root 94G 2.7G 87G 4% /
tmpfs 3.9G 66M 3.8G 2% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/fuse 128M 24K 128M 1% /etc/pve
tmpfs 782M 0 782M 0% /run/user/0
/dev/mapper/pve-data 782G 44K 742G 1% /var/lib/vz
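
A possible fstab entry to try next, just a sketch assuming the volume really is ext4-formatted as the manual mount above suggests; the nofail and timeout options are meant to keep the boot from hanging if the LV isn't activated yet, not a confirmed fix:

# hypothetical /etc/fstab line, not verified on this machine
/dev/pve/data  /var/lib/vz  ext4  defaults,nofail,x-systemd.device-timeout=30  0  2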

I'm a little confused by this issue.
lvs looks normal:
root@pve2:/var/log# lvs
File descriptor 9 (pipe:[25178]) leaked on lvs invocation. Parent PID 1448: bash
File descriptor 11 (pipe:[25179]) leaked on lvs invocation. Parent PID 1448: bash
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 794.66g 0.00 0.24
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 7.63g
root@pve2:/var/log# lvdisplay
File descriptor 9 (pipe:[25178]) leaked on lvdisplay invocation. Parent PID 1448: bash
File descriptor 11 (pipe:[25179]) leaked on lvdisplay invocation. Parent PID 1448: bash
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID rnt2fb-wXgm-lXem-aaGE-EBeL-CYEQ-p5sFEG
LV Write Access read/write
LV Creation host, time proxmox, 2023-05-02 17:04:33 +0200
LV Status available
# open 2
LV Size 7.63 GiB
Current LE 1954
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID zo0W7u-9rA6-Y7vq-dbX0-GSf7-3iwD-2pBY5f
LV Write Access read/write
LV Creation host, time proxmox, 2023-05-02 17:04:33 +0200
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name data
VG Name pve
LV UUID QYlvj0-EoRn-IOkN-myG7-RnA9-jX6j-i2dPNq
LV Write Access read/write
LV Creation host, time proxmox, 2023-05-02 17:04:54 +0200
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 1
LV Size 794.66 GiB
Allocated pool data 0.00%
Allocated metadata 0.24%
Current LE 203433
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4
 
AFAIU you have 2 nodes, one installed with ZFS as the root FS and the other with XFS or ext4, which means it uses LVM on the root disk.

When a node joins a cluster, quite a few config files are overwritten with the ones from the cluster, and storage.cfg is one of them. Therefore, the "local-lvm" configuration for node 2 was lost. Since "local-zfs" does not have a node restriction, Proxmox VE tries to activate that storage on node 2 as well.

Ideally, the nodes in the cluster are set up the same way. In your situation, you could limit "local-zfs" to node 1 only, and add another storage of type LVM-thin called "local-lvm" that points to the "pve/data" LV. It should be visible if you open the GUI on node 2 for that action. Don't forget to limit "local-lvm" to node 2, otherwise you will see the question mark on node 1 ;)
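
A rough sketch of what the resulting /etc/pve/storage.cfg could look like afterwards (storage IDs and node names taken from this thread, adjust as needed):

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes pve1

lvmthin: local-lvm
        vgname pve
        thinpool data
        content images,rootdir
        nodes pve2

The same can be done from the CLI, for example:

root@pve2:~# pvesm add lvmthin local-lvm --vgname pve --thinpool data --content images,rootdir --nodes pve2
root@pve1:~# pvesm set local-zfs --nodes pve1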
 
aaron, I think you're probably right, but this is my theory... I made node1 a few weeks ago, so I can't remember the configuration exactly... I thought they were the same, but node1 has 4TB local-zfs and node2 only 1TB. The cluster wanted only one local-zfs, but node2 doesn't have enough capacity, and this was the reason why node2 has local-zfs with a question mark and node1 now has 2 volumes with the same capacity (local and local-zfs).
What's the best scenario for me now? Remove local-zfs from node1, keep only local, and do it the same way on node2?
There is a little problem... I have to be careful, because node1 is in the customer's rack and there is one VM on it (running on local-zfs), while node2 is at my home now ;o).
So I'm going to try to fix it.

Edit: Proxmox has a special policy... in older versions, the user had local and local-lvm (or local-thin) after installation. In a newer version, the user had only a 100GB local after a fresh installation and had to create another volume afterwards. In the newest version (I think v7), the user gets the classic local and local-zfs volumes by default... and this can be confusing.
I have several v7 clusters, but all of them were upgraded from v6, and those were upgraded from v5... etc. I never built a cluster from a fresh v7 installation.
 
aaron, thanks for the hint... this could be the problem...
root@pve1:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 3.6T 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 3.6T 0 part
sdb 8:16 0 3.6T 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 512M 0 part
└─sdb3 8:19 0 3.6T 0 part
sr0 11:0 1 1024M 0 rom
zd0 230:0 0 500G 0 disk
├─zd0p1 230:1 0 549M 0 part
└─zd0p2 230:2 0 499.5G 0 part
and pve2
root@pve2:/etc/pve# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 930.5G 0 part
├─pve-swap 253:0 0 7.6G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 8.1G 0 lvm
│ └─pve-data 253:4 0 794.7G 0 lvm
└─pve-data_tdata 253:3 0 794.7G 0 lvm
└─pve-data 253:4 0 794.7G 0 lvm
 
