[SOLVED] Data isn't shown on node ui

TechHome

Active Member
Apr 12, 2020
The data isn't shown in the sunfish node UI. The disk is an SSD and its size should be around 350 GB.
Code:
root@sunfish:~# pvs
  PV           VG      Fmt  Attr PSize    PFree
  /dev/nvme0n1 lvmthin lvm2 a--  <953.87g 124.00m
  /dev/sda3    pve     lvm2 a--  <476.44g <16.00g
root@sunfish:~# vgs
  VG      #PV #LV #SN Attr   VSize    VFree
  lvmthin   1  17   0 wz--n- <953.87g 124.00m
  pve       1   3   0 wz--n- <476.44g <16.00g
root@sunfish:~# lvs
  LV            VG      Attr       LSize    Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvmthin       lvmthin twi-aotz-- <934.67g                27.87  1.55
  vm-102-disk-0 lvmthin Vwi-aotz--   32.00g lvmthin        52.40
  vm-102-disk-1 lvmthin Vwi-aotz--    4.00m lvmthin        3.12
  vm-103-disk-0 lvmthin Vwi-aotz--   50.00g lvmthin        95.30
  vm-104-disk-0 lvmthin Vwi-a-tz--   32.00g lvmthin        33.45
  vm-105-disk-0 lvmthin Vwi-a-tz--    8.00g lvmthin        25.25
  vm-105-disk-2 lvmthin Vwi-aotz--    3.00g lvmthin        99.77
  vm-106-disk-0 lvmthin Vwi-aotz--   40.00g lvmthin        55.65
  vm-109-disk-0 lvmthin Vwi-a-tz--   32.00g lvmthin        29.93
  vm-111-disk-0 lvmthin Vwi-a-tz--   64.00g lvmthin        11.79
  vm-112-disk-0 lvmthin Vwi-aotz--    2.00g lvmthin        95.33
  vm-113-disk-0 lvmthin Vwi-aotz--  111.00g lvmthin        82.04
  vm-115-disk-0 lvmthin Vwi-aotz--   32.00g lvmthin        48.36
  vm-117-disk-0 lvmthin Vwi-aotz--    4.00m lvmthin        0.00
  vm-117-disk-1 lvmthin Vwi-aotz--   26.00g lvmthin        36.93
  vm-118-disk-0 lvmthin Vwi-a-tz--   21.00g lvmthin        8.05
  vm-119-disk-0 lvmthin Vwi-aotz--  600.00g lvmthin        3.54
  data          pve     twi-a-tz-- <349.31g                0.00   0.48
  root          pve     -wi-ao----   96.00g
  swap          pve     -wi-ao----    8.00g
 
What is the output of lsblk and cat /etc/pve/storage.cfg?
 
Code:
root@sunfish:~# lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                              8:0    0   477G  0 disk
├─sda1                           8:1    0  1007K  0 part
├─sda2                           8:2    0   512M  0 part /boot/efi
└─sda3                           8:3    0 476.4G  0 part
  ├─pve-swap                   253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                   253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta             253:2    0   3.6G  0 lvm 
  │ └─pve-data                 253:4    0 349.3G  0 lvm 
  └─pve-data_tdata             253:3    0 349.3G  0 lvm 
    └─pve-data                 253:4    0 349.3G  0 lvm 
nvme0n1                        259:0    0 953.9G  0 disk
├─lvmthin-lvmthin_tmeta        253:5    0   9.6G  0 lvm 
│ └─lvmthin-lvmthin-tpool      253:7    0 934.7G  0 lvm 
│   ├─lvmthin-lvmthin          253:8    0 934.7G  0 lvm 
│   ├─lvmthin-vm--109--disk--0 253:9    0    32G  0 lvm 
│   ├─lvmthin-vm--104--disk--0 253:10   0    32G  0 lvm 
│   ├─lvmthin-vm--102--disk--0 253:11   0    32G  0 lvm 
│   ├─lvmthin-vm--102--disk--1 253:12   0     4M  0 lvm 
│   ├─lvmthin-vm--105--disk--0 253:13   0     8G  0 lvm 
│   ├─lvmthin-vm--103--disk--0 253:14   0    50G  0 lvm 
│   ├─lvmthin-vm--112--disk--0 253:15   0     2G  0 lvm 
│   ├─lvmthin-vm--113--disk--0 253:16   0   111G  0 lvm 
│   ├─lvmthin-vm--111--disk--0 253:17   0    64G  0 lvm 
│   ├─lvmthin-vm--115--disk--0 253:18   0    32G  0 lvm 
│   ├─lvmthin-vm--105--disk--2 253:19   0     3G  0 lvm 
│   ├─lvmthin-vm--117--disk--0 253:20   0     4M  0 lvm 
│   ├─lvmthin-vm--117--disk--1 253:21   0    26G  0 lvm 
│   ├─lvmthin-vm--118--disk--0 253:22   0    21G  0 lvm 
│   ├─lvmthin-vm--119--disk--0 253:23   0   600G  0 lvm 
│   └─lvmthin-vm--106--disk--0 253:24   0    40G  0 lvm 
└─lvmthin-lvmthin_tdata        253:6    0 934.7G  0 lvm 
  └─lvmthin-lvmthin-tpool      253:7    0 934.7G  0 lvm 
    ├─lvmthin-lvmthin          253:8    0 934.7G  0 lvm 
    ├─lvmthin-vm--109--disk--0 253:9    0    32G  0 lvm 
    ├─lvmthin-vm--104--disk--0 253:10   0    32G  0 lvm 
    ├─lvmthin-vm--102--disk--0 253:11   0    32G  0 lvm 
    ├─lvmthin-vm--102--disk--1 253:12   0     4M  0 lvm 
    ├─lvmthin-vm--105--disk--0 253:13   0     8G  0 lvm 
    ├─lvmthin-vm--103--disk--0 253:14   0    50G  0 lvm 
    ├─lvmthin-vm--112--disk--0 253:15   0     2G  0 lvm 
    ├─lvmthin-vm--113--disk--0 253:16   0   111G  0 lvm 
    ├─lvmthin-vm--111--disk--0 253:17   0    64G  0 lvm 
    ├─lvmthin-vm--115--disk--0 253:18   0    32G  0 lvm 
    ├─lvmthin-vm--105--disk--2 253:19   0     3G  0 lvm 
    ├─lvmthin-vm--117--disk--0 253:20   0     4M  0 lvm 
    ├─lvmthin-vm--117--disk--1 253:21   0    26G  0 lvm 
    ├─lvmthin-vm--118--disk--0 253:22   0    21G  0 lvm 
    ├─lvmthin-vm--119--disk--0 253:23   0   600G  0 lvm 
    └─lvmthin-vm--106--disk--0 253:24   0    40G  0 lvm 
root@sunfish:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup
        maxfiles 0
        shared 0

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes pangolin
        sparse 1

nfs: diskstation
        export /volume1/ProxmoxBackup
        path /mnt/pve/diskstation
        server 192.168.1.55
        content backup,vztmpl,iso
        maxfiles 4

lvmthin: lvmthin
        thinpool lvmthin
        vgname lvmthin
        content rootdir,images

nfs: freenas
        export /mnt/rz10TB/Proxmox
        path /mnt/pve/freenas
        server 192.168.1.46
        content vztmpl,iso,backup
        maxfiles 0
 
Did you add the second node, sunfish, to the cluster after you created the cluster on pangolin?

If that is the case, then the config for the thin LVM was lost: when joining a cluster, the joining node discards the contents of its /etc/pve and uses the cluster's version. Since the two machines are installed in slightly different ways (ZFS vs. non-ZFS root filesystem), you ended up in that situation.

To fix this, add an LVM-Thin storage on the sunfish node (make sure you are connected to the GUI of the sunfish node) and choose the data LV. Restrict it to the sunfish node though, as the other node does not have it.
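For reference, the resulting storage.cfg entry would look roughly like this (the storage name local-lvm-sunfish is just an example; the thin pool data and VG pve come from the lvs/vgs output above, and the nodes line restricts the storage to sunfish):

```
lvmthin: local-lvm-sunfish
        thinpool data
        vgname pve
        content rootdir,images
        nodes sunfish
```

The same entry can be created in the GUI under Datacenter → Storage → Add → LVM-Thin, with the node restriction set in the "Nodes" field.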
 

Yes, you're right. I joined the cluster from the pangolin node with sunfish. I looked for the storage in the pangolin node's web UI and found nothing, but found it, as you said, in the sunfish node's web UI.
Thanks!
 
