Problem with thin volume

bensz

Member
Dec 13, 2020
Hello,
I have a little trouble understanding and managing my LVM setup. I have 3 disks on my node: sda 1 TB (with the system), and sdb and sdc, 2 TB each, for VMs.
For the moment I only have the 2 TB from sdb available for VMs.
Here is the output of vgdisplay:
Code:
vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID            
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  32
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                8
  Open LV               7
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               <4.55 TiB
  PE Size               4.00 MiB
  Total PE              1192202
  Alloc PE / Size       627460 / 2.39 TiB
  Free  PE / Size       564742 / 2.15 TiB
  VG UUID               AVnlQd-LLi1-uzBS-HF1y-1sTC-d2HI-LQCb9J
Code:
root@lucky:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                cGW1tk-9g47-E6bx-Y1o5-uGqQ-EXZP-4P8fP0
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-08-16 13:50:28 +0200
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                nJyNO7-WXFg-Akqu-VeXR-Fn4W-T7XC-rstcZg
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-08-16 13:50:28 +0200
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
  
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                vs6TJJ-F9dm-ErdH-z50h-qIcq-6c7S-MJ42Xk
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-08-16 13:50:29 +0200
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 6
  LV Size                <2.28 TiB
  Allocated pool data    77.50%
  Allocated metadata     10.94%
  Current LE             596682
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                pve
  LV UUID                c6CA5h-dBTF-R0FT-zKfo-mXiT-azT5-EoyCvM
  LV Write Access        read/write
  LV Creation host, time lucky, 2020-09-09 17:27:43 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                <5.86 TiB
  Mapped size            28.91%
  Current LE             1536000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                2BNK2K-v12y-3FRx-ZTTu-TH9v-sA5y-L61leU
  LV Write Access        read/write
  LV Creation host, time lucky, 2020-10-01 21:43:32 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                32.00 GiB
  Mapped size            54.46%
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-103-disk-0
  LV Name                vm-103-disk-0
  VG Name                pve
  LV UUID                1GCnkH-TQlp-GTEl-qkhb-V3Tx-9RVM-u2rESi
  LV Write Access        read/write
  LV Creation host, time lucky, 2020-10-17 18:04:34 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                32.00 GiB
  Mapped size            16.26%
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-104-disk-0
  LV Name                vm-104-disk-0
  VG Name                pve
  LV UUID                jo1IOO-LIws-E4a6-KOav-m1nW-3PHX-0Gs2eg
  LV Write Access        read/write
  LV Creation host, time lucky, 2020-10-24 18:23:50 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                1000.00 GiB
  Mapped size            2.87%
  Current LE             256000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:9
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-105-disk-0
  LV Name                vm-105-disk-0
  VG Name                pve
  LV UUID                H9j9ZL-1O5z-CXTz-nReK-24k7-aAEe-bgWsNo
  LV Write Access        read/write
  LV Creation host, time lucky, 2020-10-25 09:33:37 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                1000.00 GiB
  Mapped size            2.04%
  Current LE             256000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:10

So, how can I manage my disks to have available space in my thin pool?
And I don't understand what's inside the "data" thin pool.
Thanks for the help.
Benoit
 

Attachments

  • disk.png (83.9 KB)
  • lvm.png (103.3 KB)
  • lvm-thin.png (95.7 KB)
  • stockage.png (80.7 KB)
So, how can I manage my disks to have available space in my thin pool?
And I don't understand what's inside the "data" thin pool.

By default, the PVE installer (when not installing to ZFS) creates an LVM volume group that contains the root FS, swap, and a thin pool (data) which is used for the guests and shows up in the storage configuration as `local-lvm`.
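For reference, the corresponding default entry in /etc/pve/storage.cfg usually looks roughly like this (the exact file on your system may differ):
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images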

If you run lvs you should see more details; all the guest disks should be using the data pool.
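To see what is actually inside the thin pool, you can also list the hidden internal volumes (a quick sketch; the exact output depends on your setup):
Code:
# list all LVs in the pve VG, including the hidden internal ones (data_tdata, data_tmeta)
lvs -a pve
# the Data% and Meta% columns show how full the pool's data and metadata are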

I highly recommend not having several disks in one LVM volume group, as you basically create a RAID 0 which will fail as soon as one of the disks fails.
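You can check which physical disks the pool is actually spread over, for example (a sketch, adapt to your VG name):
Code:
# show the underlying devices of every LV segment, including data_tdata / data_tmeta
lvs -a -o +devices pve
# or, per physical volume, how much of each disk is allocated
pvs -o +pv_used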

I am a bit confused right now, from the information available, about how the host is set up. Could you please show the output (in [code][/code] tags) of the following commands?
Code:
pvs
vgs
lvs
lsblk
 
Hi,
Thanks for your reply,
Code:
root@lucky:~# pvs
  PV         VG  Fmt  Attr PSize    PFree   
  /dev/sda3  pve lvm2 a--  <931.01g       0 
  /dev/sdb   pve lvm2 a--    <1.82t   <1.82t
  /dev/sdc   pve lvm2 a--    <1.82t <343.01g
Code:
root@lucky:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   3   8   0 wz--n- <4.55t 2.15t
Code:
root@lucky:~# lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz--   <2.28t             77.52  10.94                           
  root          pve -wi-ao----   96.00g                                                    
  swap          pve -wi-ao----    8.00g                                                    
  vm-100-disk-0 pve Vwi-aotz--   32.00g data        55.46                                  
  vm-102-disk-0 pve Vwi-aotz--   <5.86t data        28.91                                  
  vm-103-disk-0 pve Vwi-aotz--   32.00g data        16.26                                  
  vm-104-disk-0 pve Vwi-aotz-- 1000.00g data        2.87                                   
  vm-105-disk-0 pve Vwi-aotz-- 1000.00g data        2.04
Code:
root@lucky:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 931.5G  0 disk 
├─sda1                         8:1    0  1007K  0 part 
├─sda2                         8:2    0   512M  0 part 
└─sda3                         8:3    0   931G  0 part 
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   8.1G  0 lvm  
  │ └─pve-data-tpool         253:4    0   2.3T  0 lvm  
  │   ├─pve-data             253:5    0   2.3T  0 lvm  
  │   ├─pve-vm--102--disk--0 253:6    0   5.9T  0 lvm  
  │   ├─pve-vm--100--disk--0 253:7    0    32G  0 lvm  
  │   ├─pve-vm--103--disk--0 253:8    0    32G  0 lvm  
  │   ├─pve-vm--104--disk--0 253:9    0  1000G  0 lvm  
  │   └─pve-vm--105--disk--0 253:10   0  1000G  0 lvm  
  └─pve-data_tdata           253:3    0   2.3T  0 lvm  
    └─pve-data-tpool         253:4    0   2.3T  0 lvm  
      ├─pve-data             253:5    0   2.3T  0 lvm  
      ├─pve-vm--102--disk--0 253:6    0   5.9T  0 lvm  
      ├─pve-vm--100--disk--0 253:7    0    32G  0 lvm  
      ├─pve-vm--103--disk--0 253:8    0    32G  0 lvm  
      ├─pve-vm--104--disk--0 253:9    0  1000G  0 lvm  
      └─pve-vm--105--disk--0 253:10   0  1000G  0 lvm  
sdb                            8:16   0   1.8T  0 disk 
sdc                            8:32   0   1.8T  0 disk 
└─pve-data_tdata             253:3    0   2.3T  0 lvm  
  └─pve-data-tpool           253:4    0   2.3T  0 lvm  
    ├─pve-data               253:5    0   2.3T  0 lvm  
    ├─pve-vm--102--disk--0   253:6    0   5.9T  0 lvm  
    ├─pve-vm--100--disk--0   253:7    0    32G  0 lvm  
    ├─pve-vm--103--disk--0   253:8    0    32G  0 lvm  
    ├─pve-vm--104--disk--0   253:9    0  1000G  0 lvm  
    └─pve-vm--105--disk--0   253:10   0  1000G  0 lvm

I didn't think about the RAID 0 problem, but you're right. So I prefer to have one volume group per disk.
Thank you
Benoit
 
I didn't think about the RAID 0 problem, but you're right. So I prefer to have one volume group per disk.
If you can't use them in a mirrored setup because you need the space, but can separate them, you will still be better off: if one disk fails, it will not cause a complete failure of the host.
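As a rough sketch of how that separation could look here, assuming sdb is still completely unused (as pvs shows) and using example names (vg_sdb, data_sdb, thin-sdb) — double-check everything before running it:
Code:
# remove the (still empty) sdb from the pve volume group
vgreduce pve /dev/sdb

# create a separate volume group and a thin pool on sdb
# (leave a bit of free space for the pool's metadata)
vgcreate vg_sdb /dev/sdb
lvcreate --type thin-pool -l 95%FREE -n data_sdb vg_sdb

# register it as an additional LVM-thin storage in Proxmox VE
pvesm add lvmthin thin-sdb --vgname vg_sdb --thinpool data_sdb --content rootdir,images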
 
