Partition Addition / Expansion

SamboNZ

New Member
Feb 10, 2021
Environment:
- Proxmox v6.3-2
- Brand new installation
- 3 Nodes configured in a Cluster

Setup:
- Configured a 64GB partition on a 900GB local drive during installation

Aim:
- I want to add another LVM-Thin partition for VM disk storage in the free space on the existing disk

Issues:
- I am an IT veteran but a Linux / Proxmox newbie
- I've read the available documentation but the required process is not clear to me
- Similar threads / articles that I can find do not describe my specific situation or are not clear enough
- I do not want to reinstall

I selected a 64GB partition during installation on the (mistaken?) understanding that I was specifying the size of the 'system' partitions and that I could simply create another partition for data storage.

How can I do this?

Code:
root@Proxmox1:~# fdisk -l
Disk /dev/sda: 838.3 GiB, 900151926784 bytes, 1758109232 sectors
Disk model: LOGICAL VOLUME 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
Disklabel type: gpt
Disk identifier: 9302129F-0518-4747-B459-F06B9BFA7869

Device       Start       End   Sectors  Size Type
/dev/sda1       34      2047      2014 1007K BIOS boot
/dev/sda2     2048   1050623   1048576  512M EFI System
/dev/sda3  1050624 134217728 133167105 63.5G Linux LVM

Partition 1 does not start on physical sector boundary.


Disk /dev/mapper/pve-swap: 7.9 GiB, 8455716864 bytes, 16515072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes


Disk /dev/mapper/pve-root: 15.8 GiB, 16911433728 bytes, 33030144 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Code:
root@Proxmox1:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                idAMcg-DF2d-LOYN-9WgM-Ut9q-UT4I-yy61on
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-01-07 16:49:05 +1300
  LV Status              available
  # open                 2
  LV Size                <7.88 GiB
  Current LE             2016
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                pcQAMe-ayFa-pDvp-oMcu-vOIL-siEb-qwrNOE
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-01-07 16:49:05 +1300
  LV Status              available
  # open                 1
  LV Size                15.75 GiB
  Current LE             4032
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
  
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                EhmPAk-V92l-AA95-6B9G-dDsH-6msG-knGyeH
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-01-07 16:49:06 +1300
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <30.00 GiB
  Allocated pool data    0.00%
  Allocated metadata     1.58%
  Current LE             7679
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

Code:
root@Proxmox1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
 
- I want to add another LVM-Thin partition for VM disk storage in the free space on the existing disk
From reading your question, it sounds like you may prefer to just extend your current data volume. To do that:
1. Create a new partition on your disk, using the empty space. I'd recommend fdisk [1] for this.
2. Create a physical volume on the new partition, e.g. /dev/sda4: pvcreate /dev/sda4.
3. Extend the pve volume group (which contains root and data) with the new volume: vgextend pve /dev/sda4.
4. Extend the "data" logical volume: lvextend -L +xG /dev/pve/data (where x is the size to add in GiB). The whole sequence is sketched below.
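
Putting those together, a minimal sketch, assuming the new partition ends up as /dev/sda4 and picking 100G as an example size:
Code:
fdisk /dev/sda                   # create a new partition in the free space (n, accept defaults, w)
pvcreate /dev/sda4               # initialise it as an LVM physical volume
vgextend pve /dev/sda4           # add it to the pve volume group
lvextend -L +100G /dev/pve/data  # grow the data thin pool, here by an example 100 GiB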

You can also repeat step 4 to expand your root volume; you'll just need to resize the filesystem afterwards with resize2fs [2].
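For example, a minimal sketch, assuming you want to give root another 10 GiB (the size here is just an assumption):
Code:
lvextend -L +10G /dev/pve/root  # grow the root logical volume by 10 GiB
resize2fs /dev/pve/root         # grow the ext4 file system to fill the volume (works while mounted)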
To get a clearer idea on LVM, I'd recommend reading this [3].

[1] https://man7.org/linux/man-pages/man8/fdisk.8.html
[2] https://linux.die.net/man/8/resize2fs
[3] https://opensource.com/business/16/9/linux-users-guide-lvm
 
Thanks @dylanw for that comprehensive answer.

It was my intention to use the free space (after root and swap) on the 64GB partition for ISO and similar storage, and to create a second partition purely for VM storage.

Can I convert the existing 30GB volume to ISO storage and create a second partition using the remaining disk space for LVM-Thin storage?
 
Happy to help :)

What I would suggest is following steps 1-3 above to extend the volume group with a new partition, then creating a new iso-storage volume in the volume group:
1. Create a new logical volume for the ISO storage: lvcreate -L 30G --name iso-storage pve.
2. Make a file system on the volume in order to store ISOs: mkfs -t ext4 /dev/pve/iso-storage.
3. Mount it somewhere: mount /dev/mapper/pve-iso--storage /mnt/iso/.
4. Add it as PVE storage: pvesm add dir iso-storage --path /mnt/iso --content iso,vztmpl

Then use step 4 to extend the data volume to the maximum size with lvextend -l +100%FREE /dev/pve/data. This avoids having to convert the existing data volume into a file system for the ISOs and then create a new thin pool for VM data from scratch.
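
Once that's done, a quick sanity check (assuming the steps above ran cleanly):
Code:
pvesm status  # the new iso-storage entry should be listed as active
vgs pve       # the volume group should now show little or no free space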
 
Thanks again @dylanw, that did the trick!

For future reference, here are all the commands I used:

Code:
fdisk /dev/sda
p            # print the current partition table
n            # create a new partition
<enter> x 3  # press Enter three times to accept the defaults (partition number, first and last sector)
p            # print the table again to verify the new partition
w            # write the changes and exit

pvcreate /dev/sda4                      # initialise the new partition as an LVM physical volume

vgextend pve /dev/sda4                  # add it to the pve volume group

lvcreate -L 30G --name iso-storage pve  # create a 30GB logical volume for ISO storage

mkfs -t ext4 /dev/pve/iso-storage       # put an ext4 file system on it

mkdir /mnt/iso                          # create a mount point

mount /dev/mapper/pve-iso--storage /mnt/iso/

pvesm add dir iso-storage --path /mnt/iso --content iso,vztmpl

lvextend -l +100%FREE /dev/pve/data     # give all remaining free space to the data thin pool
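
To sanity-check the resulting layout afterwards, something like this should work:
Code:
lsblk           # the new sda4 partition and the mounted pve-iso--storage volume should be visible
df -h /mnt/iso  # confirm the ISO volume is mounted with the expected size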

I understand the Proxmox disk/file system structure much more clearly now, but I have a couple of additional questions:
  1. Will the new iso-storage volume auto-mount?
  2. I notice that some volumes only show at a datacenter/cluster level. At which point in this process were we making changes to the cluster vs. the node?
  3. How does the cluster-level storage work with each node? e.g., if I upload something to the iso-storage volume, where is it physically stored?
Thanks!
 
Glad it worked for you, and thanks for posting the commands :)

Will the new iso-storage volume auto-mount?
Ah sorry, I left this out. You'll have to add an entry for it in /etc/fstab. Something like: echo "/dev/pve/iso-storage /mnt/iso ext4 defaults 0 2" >> /etc/fstab should do it.
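
You can test the new entry without rebooting:
Code:
mount -a          # mount everything listed in /etc/fstab that isn't mounted yet
findmnt /mnt/iso  # confirm the volume is mounted where expected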

I notice that some volumes only show at a datacenter/cluster level. At which point in this process were we making changes to the cluster vs. the node?
The storage options at the node level typically refer just to that node: there you'll see the disks directly attached to it and the options to create file systems on it. At the datacenter/cluster level, this is extended with options concerning which nodes have access to each storage.

How does the cluster-level storage work with each node? e.g., if I upload something to the iso-storage volume, where is it physically stored?
It will be physically stored on the node on which the storage was mounted. Each node in a cluster shares /etc/pve/storage.cfg, so all nodes are aware of each storage device and which nodes have access to it. Sharing storage between nodes is helpful for shared resources and VM migration, but if you're going to set up cluster storage, you should generally go with Ceph or some kind of network file system, so that each node has actual access. Other types, such as a directory, won't provide actual shared access.
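
For example, a hypothetical /etc/pve/storage.cfg entry (the nodes line restricts the directory storage to the node that actually has the volume mounted; the node name is just an assumption):
Code:
dir: iso-storage
        path /mnt/iso
        content iso,vztmpl
        nodes Proxmox1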
 
Ah sorry, I left this out. You'll have to add an entry for it in /etc/fstab. Something like: echo "/dev/pve/iso-storage /mnt/iso ext4 defaults 0 2" >> /etc/fstab should do it.

Interestingly, after a reboot the iso-storage volume appears to be mounted and working even without this additional auto-mount command.
 
Are you sure that the volume you see isn't referring to the same storage as "local"? Check to see if the usage on both matches up.
In general, since a "Directory" storage just refers to a directory on the system, the entry in /etc/pve/storage.cfg remains valid even when nothing is mounted there; it will simply point at whatever file system is underneath.
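
A quick way to check, using the paths from this thread:
Code:
df -h /var/lib/vz /mnt/iso  # if both show the same file system and usage, nothing separate is mounted at /mnt/iso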
 
