Confusion over media storage / partitions

dew1989

New Member
Nov 5, 2023
Very new to Proxmox and Linux in general, so I am thoroughly confused. First off, my setup is very simple: a single 1 TB SSD that I want to run 4 things on: Home Assistant (running great), Plex, qBittorrent, and all remaining space (~800 GB) as media storage (for Plex, and where qBittorrent downloads to). In a Windows world there would be 5 partitions on my single drive (Proxmox, HA, Plex, qBittorrent, storage). Obviously this is a different world, and I'm stuck. What is the recommended (simple) way to go about this? I have things sort of working, however Plex and qBittorrent only see ~90 GB of available storage.

This is how my setup is currently configured: https://imgur.com/2bCf3p4
output of fdisk -l:
Code:
Disk /dev/nvme0n1: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: SKHynix_HFS001TD9TNG-L5B0B             
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: AAEE369E-2813-4EC9-9724-8F4616660A50

Device           Start        End    Sectors   Size Type
/dev/nvme0n1p1      34       2047       2014  1007K BIOS boot
/dev/nvme0n1p2    2048    2099199    2097152     1G EFI System
/dev/nvme0n1p3 2099200 2000409230 1998310031 952.9G Linux LVM


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-vm--100--disk--0: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/pve-vm--100--disk--1: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt
Disk identifier: 9BF6C2A4-014A-49F4-A0E1-80E1C7895F72

Device                                   Start      End  Sectors  Size Type
/dev/mapper/pve-vm--100--disk--1-part1    2048    67583    65536   32M EFI System
/dev/mapper/pve-vm--100--disk--1-part2   67584   116735    49152   24M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part3  116736   641023   524288  256M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part4  641024   690175    49152   24M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part5  690176  1214463   524288  256M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part6 1214464  1230847    16384    8M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part7 1230848  1427455   196608   96M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part8 1427456 67108830 65681375 31.3G Linux filesystem


Disk /dev/mapper/pve-vm--101--disk--0: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/pve-vm--102--disk--0: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
 
Linux disk partitions are not much different from Windows partitions. However, you are virtualizing all of those systems, so just use virtual disks. Give Proxmox all the available storage and create VMs with virtual disks of appropriate sizes. Don't try to split the real hardware into pieces for the VMs; that's not how enterprise hypervisors, or virtualization as a way to consolidate services, work.
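As a rough, hedged illustration (the VM ID 101, the disk slot and the sizes below are placeholders, not from this thread), adding or growing a virtual disk backed by the default local-lvm thin pool is a one-liner per disk in the node shell:

Code:
# give VM 101 an extra 200 GiB virtual disk, allocated from the local-lvm thin pool
qm set 101 --scsi1 local-lvm:200

# or grow an existing disk by 100 GiB (the filesystem inside the guest still
# has to be expanded afterwards)
qm resize 101 scsi0 +100G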
 
Thank you. I'm confused about what I should be resizing, and I don't understand why Plex/qBittorrent only see ~90 GB free.
 
This is how my setup is currently configured: https://imgur.com/2bCf3p4
output of fdisk -l:
Code:
Disk /dev/nvme0n1: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: SKHynix_HFS001TD9TNG-L5B0B            
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: AAEE369E-2813-4EC9-9724-8F4616660A50

Device           Start        End    Sectors   Size Type
/dev/nvme0n1p1      34       2047       2014  1007K BIOS boot
/dev/nvme0n1p2    2048    2099199    2097152     1G EFI System
/dev/nvme0n1p3 2099200 2000409230 1998310031 952.9G Linux LVM


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
...

Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
...


Disk /dev/mapper/pve-vm--100--disk--0: 4 MiB, 4194304 bytes, 8192 sectors
...


Disk /dev/mapper/pve-vm--100--disk--1: 32 GiB, 34359738368 bytes, 67108864 sectors
...

Device                                   Start      End  Sectors  Size Type
/dev/mapper/pve-vm--100--disk--1-part1    2048    67583    65536   32M EFI System
/dev/mapper/pve-vm--100--disk--1-part2   67584   116735    49152   24M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part3  116736   641023   524288  256M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part4  641024   690175    49152   24M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part5  690176  1214463   524288  256M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part6 1214464  1230847    16384    8M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part7 1230848  1427455   196608   96M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part8 1427456 67108830 65681375 31.3G Linux filesystem


Disk /dev/mapper/pve-vm--101--disk--0: 4 GiB, 4294967296 bytes, 8388608 sectors
...


Disk /dev/mapper/pve-vm--102--disk--0: 4 GiB, 4294967296 bytes, 8388608 sectors
...
You are not confused about Linux or partitions in the traditional sense; the ISO installer just did things its own opinionated way, and it uses LVM.

You will find lots of answers to your confusion here: https://linuxhandbook.com/lvm-guide/

To be fair, one does not have to use LVM, but it allows for thin provisioning, so unless you go with other options like ZFS on the node, it's not a bad idea to stick with the LVM way of doing things for a start.
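If you want to see what the installer actually set up, the standard LVM reporting commands are read-only and safe to run on the node; a minimal sketch:

Code:
# physical volumes -> volume groups -> logical volumes, top to bottom
pvs       # the partition handed to LVM, e.g. /dev/nvme0n1p3
vgs       # the volume group the installer created ("pve"), with its free space
lvs -a    # all logical volumes, including the thin pool "data" and its usage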
 
I do not think he wants to resize his Plex VM's drive, for example. I would much rather create a new volume on that LVM and mount it into the VM. That volume would then hold data only. As it's all thin-provisioned from the same space, it keeps things tidy and does not waste any space.

Just be mindful, in case you use backups or replication, that you do not omit that separate data volume from your backups (if you do not have other ways of covering it).
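As a hedged example for the case where the guests are LXC containers (the container ID, size and path are made up), a mount point allocated from the same thin pool can be flagged for backup inclusion right when it is added:

Code:
# allocate a 300 GiB mount point for container 101 from the local-lvm thin pool,
# mount it at /mnt/media inside the container, and include it in vzdump backups
pct set 101 -mp0 local-lvm:300,mp=/mnt/media,backup=1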
 
It is possible, within the GUI, to add extra mount points, but you may need to create a named storage first to have anything available in the dropdown list. The mount point could then, well, mount into the directory structure inside your VM wherever you want.

[Attachment: Screenshot 2023-11-05 at 23.17.36.png]

I do not want to send you down the wrong path, as I have all my PVE nodes backed by ZFS now, but to have that storage pool available I think one has to start looking in the menus under "Datacenter" and basically include that LVM group as a named storage pool.
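From the shell you can check the same thing, and if the thin pool were ever missing from the storage list it can be registered as a named storage entry. A hedged sketch (the storage ID "media-thin" is invented, and on a default install "local-lvm" already points at this pool):

Code:
# list the storage entries Proxmox already knows about (local, local-lvm, ...)
pvesm status

# only needed if the thin pool is not yet registered as a storage:
pvesm add lvmthin media-thin --vgname pve --thinpool data --content rootdir,images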
 
... and all remaining space (~800 GB) as media storage (for Plex, and where qBittorrent downloads to). ...
Definitely create a new volume on that underlying LVM; no need to resize anything. Then mount it into both VMs.

Play around with the LVM commands to see the structure of what PVE created, and report back if you get stuck anywhere. The suggested steps are:
1) Create an LVM volume
2) Create a filesystem on it (ext4 is fine)
3) Add it as a mount point to both VMs

It would help if you have roughly the same user structure (matching UIDs/GIDs) on both VMs.
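On that last point, a hedged sketch of lining up ownership (the GID 1001, the group name and the service user names are assumptions, adjust to whatever your installs actually use): create the same group with the same ID in both guests so files downloaded by qBittorrent stay readable by Plex:

Code:
# run in both guests; pick any free GID, just keep it identical everywhere
groupadd --gid 1001 media
usermod -aG media plex          # assuming the service user is called "plex"
usermod -aG media qbittorrent   # and "qbittorrent" in the other guest
chown -R :media /mnt/media && chmod -R g+rw /mnt/media   # shared media path is an example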
 
Definitely create a new volume on that underlying LVM; no need to resize anything. Then mount it into both VMs.

Play around with the LVM commands to see the structure of what PVE created, and report back if you get stuck anywhere. The suggested steps are:
1) Create an LVM volume
2) Create a filesystem on it (ext4 is fine)
3) Add it as a mount point to both VMs

It would help if you have roughly the same user structure (matching UIDs/GIDs) on both VMs.
This sounds ideal, but I don't understand how to do this with one physical disk.
I looked at this: https://www.diytechguru.com/2020/12/12/create-lvm-storage-in-proxmox/ but it mentions wiping a disk, which obviously I can't do.
 
This sounds ideal, but I don't understand how to do this with one physical disk.
I looked at this: https://www.diytechguru.com/2020/12/12/create-lvm-storage-in-proxmox/ but it mentions wiping a disk, which obviously I can't do.
The pvcreate and vgcreate parts have basically already been done for you during the Proxmox install. I think you want a simple follow-along tutorial, but you'll have to experiment a bit.

Get into the node shell and see what you have:
lvdisplay

Then, with lvcreate (be sure to use the --thin option), you can create a new volume in a similar fashion to the root volumes that already exist for your VMs within the same volume group.

If you prefer to stay in the GUI, then check (and maybe post here) what it shows for you now - the equivalent of steps 5 and 6 of the guide you quoted. I am pretty sure you are halfway there; you just want an extra volume. I suspect you could even create a faux VM, allocate its logical volume on the same volume group, then remove the VM but keep the volume, and mount that in the two actual VMs - but why hack it when it can be done cleanly?
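Before creating anything, it does not hurt to confirm the pool has the space you expect; these checks are read-only:

Code:
# size and current usage of the thin pool
lvs pve/data

# free space left in the volume group as a whole
vgs pve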
 
The thing with the terminal is that, if you understand the concepts, it is much easier to troubleshoot from simple terminal output posted here. For instance, the lvdisplay output shows us the whole picture at once, whereas getting the same information otherwise means screenshotting multiple sections of the GUI, which in the end just run these very same commands in the background.
 
The thing with the terminal is that, if you understand the concepts, it is much easier to troubleshoot from simple terminal output posted here. For instance, the lvdisplay output shows us the whole picture at once, whereas getting the same information otherwise means screenshotting multiple sections of the GUI, which in the end just run these very same commands in the background.
terminal is fine. here is my lvdisplay output:

Code:
 --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                bk7byQ-ef2O-A2lI-Uwkf-OVEe-15S7-ukpXBT
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2023-10-31 17:25:44 -0700
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <816.21 GiB
  Allocated pool data    1.71%
  Allocated metadata     0.29%
  Current LE             208949
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
  
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                qvFcYf-FFeH-0UYw-UKdc-HLIP-Oc0L-rULL9n
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-10-31 17:25:40 -0700
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                iRdOgs-rL0Y-Xq4o-rtKT-3uBj-sFfK-ke88GD
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-10-31 17:25:40 -0700
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                ClaSSj-iKAB-b0AW-Vy6F-7ih7-WuTY-iGyg43
  LV Write Access        read/write
  LV Creation host, time server, 2023-10-31 19:28:24 -0700
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                4.00 MiB
  Mapped size            0.00%
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-1
  LV Name                vm-100-disk-1
  VG Name                pve
  LV UUID                Lu8CZd-bK5w-T08t-g7fc-oCY8-6QWd-ERIfBr
  LV Write Access        read/write
  LV Creation host, time server, 2023-10-31 19:28:25 -0700
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                32.00 GiB
  Mapped size            35.44%
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-101-disk-0
  LV Name                vm-101-disk-0
  VG Name                pve
  LV UUID                iD4sSn-R7vL-EXvj-kQ4c-Z71M-Jvi4-wJbg1s
  LV Write Access        read/write
  LV Creation host, time server, 2023-11-02 23:06:47 -0700
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Mapped size            43.73%
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                pve
  LV UUID                JAWrU2-XWko-JgxS-eOXQ-1Tiw-tsgE-SbRL2K
  LV Write Access        read/write
  LV Creation host, time server, 2023-11-02 23:42:49 -0700
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Mapped size            22.49%
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:9
 
terminal is fine. here is my lvdisplay output:

Code:
 --- Logical volume ---
  LV Name                data
  VG Name                pve
  ...
(full lvdisplay output quoted above)
So you have a completely standard install: your VG (volume group) name is "pve", all your VMs' LVs (logical volumes) are created within it and mapped to /dev/pve/${lvname}; the names then follow the format vm-${ID}-disk-${No}, but we do not care about that part.

Note that Proxmox created an LV called data within the VG pve, but it's thin provisioned.

What you want is an extra LV within the same VG - but now I realise that Proxmox already gives you the thin pool data (the very first item), so you might want to use that. I don't have such a setup at hand, but the following is non-destructive and creates the LV:
lvcreate -V 300G --thin -n media_volume pve/data

If this does not work for you, I will have to spin up a VM with Proxmox and LVM myself. :D Or you will have to read up more on how it uses LVM, though some of it might be outdated:
https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM)
https://pve.proxmox.com/wiki/Storage:_LVM_Thin

Once you have that volume, you should be able to create a filesystem on it, simply:
mkfs.ext4 /dev/pve/media_volume
BE CAREFUL WHERE YOU POINT THIS COMMAND BEFORE EXECUTING IT!

Then it should be mountable in your VMs.
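What "mountable" looks like depends on whether Plex and qBittorrent run as LXC containers or as full VMs; the following is only a sketch (the IDs 101/102 and /mnt/media are placeholders). For containers, the simplest route is to mount the new LV once on the host and bind-mount that directory into both containers. A single ext4 volume must not be attached read-write to two full VMs at the same time, so with VMs you would attach it to one guest and share it over the network instead.

Code:
# on the Proxmox host: mount the new volume and make the mount persistent
mkdir -p /mnt/media
mount /dev/pve/media_volume /mnt/media
echo '/dev/pve/media_volume /mnt/media ext4 defaults 0 2' >> /etc/fstab

# bind-mount the directory into both containers (IDs are examples)
pct set 101 -mp0 /mnt/media,mp=/mnt/media
pct set 102 -mp0 /mnt/media,mp=/mnt/media

# for a full VM instead, pass the LV through as a disk and mount it inside the guest:
# qm set 101 -scsi1 /dev/pve/media_volume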
 
