[SOLVED] Adding SSDs to a Host

epretorious

Jan 19, 2024
I've added two 500 GB SSDs to my lab system:

Code:
root@pve-0:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0 465.8G  0 disk
└─sda1                         8:1    0 465.7G  0 part
sdb                            8:16   0 465.8G  0 disk
└─sdb1                         8:17   0 465.7G  0 part
nvme0n1                      259:0    0 476.9G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 475.9G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   3.6G  0 lvm
  │ └─pve-data-tpool         252:4    0 348.8G  0 lvm
<<<snip>>>
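
The partitioning step isn't shown above, but for anyone following along from blank disks, a single LVM-type GPT partition spanning each SSD could be created with sgdisk, e.g. (a sketch, not from the original post):

Code:
# Hypothetical: wipe the disk, then create one partition of type "Linux LVM" (8e00)
root@pve-0:~# sgdisk --zap-all /dev/sda
root@pve-0:~# sgdisk --new=1:0:0 --typecode=1:8e00 /dev/sda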

I've initialized the SSDs' partitions as LVM physical volumes:

Code:
root@pve-0:~# pvcreate /dev/sda1
  Physical volume "/dev/sda1" successfully created.

root@pve-0:~# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.

I've added the two new PVs to the pve LVM volume group:

Code:
root@pve-0:~# vgextend pve /dev/sda1
  Volume group "pve" successfully extended

root@pve-0:~# vgextend pve /dev/sdb1
  Volume group "pve" successfully extended

But in the UI, the local-lvm storage is still shown at its original size (348.8G) even though the pve volume group has grown from 475.9 GiB to 1.37 TiB:

Code:
root@pve-0:~# vgdisplay pve
  --- Volume group ---
  VG Name               pve
  System ID          
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  47
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                12
  Open LV               2
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               1.37 TiB
  PE Size               4.00 MiB
  Total PE              360286
<<<snip>>>

Is it possible to use the two new SSDs to extend the existing local-lvm storage? Or will I need to create new storages instead?

TIA,
Eric Pretorious
Reno, Nevada
 
epretorious
Looking a bit more deeply into the LVM logical volumes...

Code:
root@pve-0:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
<<<snip>>>
nvme0n1                      259:0    0 476.9G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 475.9G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   3.6G  0 lvm
  │ └─pve-data-tpool         252:4    0 348.8G  0 lvm
  │   ├─pve-data             252:5    0 348.8G  1 lvm
  │   ├─pve-vm--200--disk--0 252:6    0    32G  0 lvm
<<<snip>>>
  └─pve-data_tdata           252:3    0 348.8G  0 lvm
    └─pve-data-tpool         252:4    0 348.8G  0 lvm
      ├─pve-data             252:5    0 348.8G  1 lvm
      ├─pve-vm--200--disk--0 252:6    0    32G  0 lvm
<<<snip>>>

I can see that I need to add that capacity to the thin-pool logical volume (pve/data) that backs local-lvm. So that's just what I did:

Code:
root@pve-0:~# lvextend -L1200G /dev/pve/data
  Size of logical volume pve/data_tdata changed from <348.82 GiB (89297 extents) to 1.17 TiB (307200 extents).
  Logical volume pve/data successfully resized.
 
root@pve-0:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0 465.8G  0 disk
└─sda1                         8:1    0 465.7G  0 part
  └─pve-data_tdata           252:3    0   1.2T  0 lvm
    └─pve-data-tpool         252:4    0   1.2T  0 lvm
      ├─pve-data             252:5    0   1.2T  1 lvm
      ├─pve-vm--200--disk--0 252:6    0    32G  0 lvm
<<<snip>>>
sdb                            8:16   0 465.8G  0 disk
└─sdb1                         8:17   0 465.7G  0 part
  └─pve-data_tdata           252:3    0   1.2T  0 lvm
    └─pve-data-tpool         252:4    0   1.2T  0 lvm
      ├─pve-data             252:5    0   1.2T  1 lvm
      ├─pve-vm--200--disk--0 252:6    0    32G  0 lvm
<<<snip>>>
nvme0n1                      259:0    0 476.9G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 475.9G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   3.6G  0 lvm
  │ └─pve-data-tpool         252:4    0   1.2T  0 lvm
  │   ├─pve-data             252:5    0   1.2T  1 lvm
  │   ├─pve-vm--200--disk--0 252:6    0    32G  0 lvm
<<<snip>>>
  └─pve-data_tdata           252:3    0   1.2T  0 lvm
    └─pve-data-tpool         252:4    0   1.2T  0 lvm
      ├─pve-data             252:5    0   1.2T  1 lvm
      ├─pve-vm--200--disk--0 252:6    0    32G  0 lvm
<<<snip>>>
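
One caveat worth noting: lvextend -L grows only the pool's data device (as the output above shows, only pve/data_tdata changed); the pool's metadata LV (pve-data_tmeta, still 3.6G in the listing) isn't grown with it. A sketch of how to check metadata usage and, if it's running high, extend it, assuming the volume names shown in this thread:

Code:
# Check Data% and Meta% on the thin pool (hidden LVs included with -a)
root@pve-0:~# lvs -a pve

# If Meta% runs high, grow the pool's metadata LV, e.g. by 1 GiB
root@pve-0:~# lvextend --poolmetadatasize +1G pve/data

An alternative to picking a fixed size with -L would have been lvextend -l +100%FREE pve/data, which hands the thin pool all remaining free extents in the volume group.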

I'm certain that I'm in over my head with Linux LVM, but it seems to work: the UI now reports that local-lvm has a capacity of 1.29 TB (i.e., the 1200 GiB pool expressed in decimal units)!
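
The same figure can be cross-checked from the command line, e.g.:

Code:
# The thin pool's new size as LVM reports it
root@pve-0:~# lvs pve/data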

Eric P.
 
