Trouble extending lvm-thin with a new disk

Aug 29, 2019
Hi,
I want to add an extra disk to my lvm-thin storage; here is what I did:

Partition the new disk with cfdisk /dev/sdd
(one LVM partition):
Code:
Model: ATA WDC WD6002FRYZ-0 (scsi)
Disk /dev/sdd: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  6001GB  6001GB                     lvm

Then pvcreate /dev/sdd1

Extend the volume group: vgextend vmdata /dev/sdd1
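
Put together, the whole sequence was (just a sketch; the device and VG names are from my setup):

Code:
# create a single LVM-type partition on the new disk (GPT)
cfdisk /dev/sdd
# initialize the new partition as an LVM physical volume
pvcreate /dev/sdd1
# add the new physical volume to the existing volume group
vgextend vmdata /dev/sdd1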

Now I have:
Code:
vgdisplay vmdata
  --- Volume group ---
  VG Name               vmdata
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  517
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                40
  Open LV               4
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               <9.10 TiB
  PE Size               4.00 MiB
  Total PE              2384652
  Alloc PE / Size       2384652 / <9.10 TiB
  Free  PE / Size       0 / 0   
  VG UUID               lhtyEH-EYaQ-Od6L-FHvG-Qr6K-1YbD-K4dX4o

So I tried to extend the lvm-thin pool:
lvresize --size 9.10T --poolmetadatasize 9.10T vmdata/vmstore

but I still get an LV size of 3.70T:

Code:
lvdisplay vmdata/vmstore
  --- Logical volume ---
  LV Name                vmstore
  VG Name                vmdata
  LV UUID                MhJUXw-ZrWF-xJhW-9UXF-iw1b-skRk-KS91no
  LV Write Access        read/write
  LV Creation host, time pve, 2019-03-08 11:01:20 +0100
  LV Pool metadata       vmstore_tmeta
  LV Pool data           vmstore_tdata
  LV Status              available
  # open                 28
  LV Size                <3.70 TiB
  Allocated pool data    33.13%
  Allocated metadata     0.24%
  Current LE             969014
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

How can I solve this?
Thanks
 
Extend volume group vgextend vmdata /dev/sdd1
Please note that this is not very safe - if either of your two disks fails you lose all data (it behaves more or less like RAID0, without the speed improvement).

Please post the output of
`pvs -a`
`vgs -a`
`lvs -a`

You probably don't want to give the whole size to poolmetadata - see `man lvmthin`.
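
For example, something along these lines (just a sketch - the +1G metadata increase below is illustrative, check `man lvmthin` for proper sizing):

Code:
# grow the pool metadata by a modest amount first (if it needs growing at all)
lvextend --poolmetadatasize +1G vmdata/vmstore
# then give the remaining free extents to the pool data
lvextend -l +100%FREE vmdata/vmstore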

Hope this helps!
 
Please note that this is not very safe - if either of your two disks fails you lose all data (it behaves more or less like RAID0, without the speed improvement).
Ok, understood.

Please post the output of
`pvs -a`
`vgs -a`
`lvs -a`

Code:
# pvs -a
  PV                                              VG     Fmt  Attr PSize    PFree
  /dev/sda2                                                   ---        0     0
  /dev/sda3                                       pve    lvm2 a--  <465.26g    0
  /dev/sdb1                                       vmdata lvm2 a--    <3.64t    0
  /dev/sdc1                                                   ---        0     0
  /dev/sdd1                                       vmdata lvm2 a--    <5.46t    0
  /dev/sde1                                                   ---        0     0
  /dev/sde9                                                   ---        0     0
  /dev/vmdata/vm-100-state-v2_4_1_run                         ---        0     0
  /dev/vmdata/vm-100-state-v2_4_webng                         ---        0     0
  /dev/vmdata/vm-103-state-suspend-2019-04-18                 ---        0     0
  /dev/vmdata/vm-106-state-v8_9_1_0                           ---        0     0
  /dev/vmdata/vm-111-state-avant_test_FA_2019_027             ---        0     0


# vgs -a
  VG     #PV #LV #SN Attr   VSize    VFree
  pve      1   2   0 wz--n- <465.26g    0
  vmdata   2  26   0 wz--n-   <9.10t    0

# lvs -a
  LV                                        VG     Attr       LSize    Pool    Origin                                    Data%  Meta%  Move Log Cpy%Sync Convert
  root                                      pve    -wi-ao---- <457.26g                                                                                         
  swap                                      pve    -wi-ao----    8.00g                                                                                         
  [lvol0_pmspare]                           vmdata ewi-------  120.00m                                                                                         
  snap_vm-100-disk-0_v2_4_1_run             vmdata Vri---tz-k   27.00g vmstore                                                                                 
  snap_vm-100-disk-0_v2_4_webng             vmdata Vri---tz-k   27.00g vmstore vm-100-disk-0                                                                   
  snap_vm-100-disk-1_v2_4_1_run             vmdata Vri---tz-k   15.00g vmstore                                                                                 
  snap_vm-100-disk-1_v2_4_webng             vmdata Vri---tz-k   15.00g vmstore vm-100-disk-1                                                                   
  snap_vm-100-disk-2_v2_4_1_run             vmdata Vri---tz-k   50.00g vmstore                                                                                 
  snap_vm-100-disk-2_v2_4_webng             vmdata Vri---tz-k   50.00g vmstore vm-100-disk-2                                                                   
  snap_vm-105-disk-0_V241                   vmdata Vri---tz-k  500.00g vmstore vm-105-disk-0                                                                   
  snap_vm-106-disk-0_v8_9_1_0               vmdata Vri---tz-k  233.00g vmstore                                                                                 
  snap_vm-111-disk-0_Ouverture_pare_feu     vmdata Vri---tz-k  250.00g vmstore                                                                                 
  snap_vm-111-disk-0_avant_test_FA_2019_027 vmdata Vri---tz-k  250.00g vmstore                                                                                 
  vm-100-disk-0                             vmdata Vwi-a-tz--   27.00g vmstore                                           100.00                                 
  vm-100-disk-1                             vmdata Vwi-a-tz--   15.00g vmstore                                           100.00                                 
  vm-100-disk-2                             vmdata Vwi-a-tz--   50.00g vmstore                                           100.00                                 
  vm-100-state-v2_4_1_run                   vmdata Vwi-a-tz--  <12.49g vmstore                                           22.04                                 
  vm-100-state-v2_4_webng                   vmdata Vwi-a-tz--  <12.49g vmstore                                           44.98                                 
  vm-103-disk-0                             vmdata Vwi-a-tz--   32.00g vmstore                                           40.54                                 
  vm-103-state-suspend-2019-04-18           vmdata Vwi-a-tz--   <8.49g vmstore                                           44.04                                 
  vm-105-disk-0                             vmdata Vwi-a-tz--  500.00g vmstore                                           6.39                                   
  vm-106-disk-0                             vmdata Vwi-a-tz--  233.00g vmstore snap_vm-106-disk-0_v8_9_1_0               99.95                                 
  vm-106-state-v8_9_1_0                     vmdata Vwi-a-tz--   <8.49g vmstore                                           32.95                                 
  vm-107-disk-0                             vmdata Vwi-a-tz--   27.00g vmstore                                           62.02                                 
  vm-107-disk-1                             vmdata Vwi-a-tz--   15.00g vmstore                                           13.55                                 
  vm-107-disk-2                             vmdata Vwi-a-tz--   50.00g vmstore                                           23.55                                 
  vm-111-disk-0                             vmdata Vwi-a-tz--  250.00g vmstore snap_vm-111-disk-0_avant_test_FA_2019_027 100.00                           
  vm-111-state-avant_test_FA_2019_027       vmdata Vwi-a-tz--  <16.49g vmstore                                           24.73                                 
  vmstore                                   vmdata twi-aotz--   <3.70t                                                   19.51  0.19                           
  [vmstore_tdata]                           vmdata Twi-ao----   <3.70t                                                                                         
  [vmstore_tmeta]                           vmdata ewi-ao----    5.40t

You probably don't want to give the whole size to poolmetadata - see `man lvmthin`.

Hmm, yes, you are right - looking at the lvs output above, almost all of the new space went to vmstore_tmeta (5.40t) instead of the pool data.

Can I revert this and remove the extra disk from the volume group?
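
Would something like this be the right way (assuming there are enough free extents left on the other PV to receive the moved data, which may not be the case here after the failed resize)?

Code:
# move all allocated extents off the new disk (needs free space elsewhere in the VG)
pvmove /dev/sdd1
# drop the now-empty physical volume from the volume group
vgreduce vmdata /dev/sdd1
# wipe the LVM label so the disk can be reused
pvremove /dev/sdd1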

Thanks.