Cannot mount thin pool LV from fstab

barchetta

Member
Apr 3, 2022
15
0
6
I am running Proxmox VE 6.3-2

I wanted to use the space on my SSD drive efficiently, and I read that rather than VMs taking up a fixed amount of space, I could achieve that with a thin pool.

I created a thin pool and then an LV in it (I think). I can mount it manually but can't seem to mount it from fstab. All I know is I wanted to dynamically allocate space for the VM disks, so I did this...
Here is the LV:
--- Logical volume ---
  LV Name                500ssdthpl
  VG Name                500ssd
  LV UUID                gkXKmv-gXgb-9lOq-c3Jk-7dHW-Fg2G-rSnB2I
  LV Write Access        read/write
  LV Creation host, time host1, 2021-09-26 11:58:19 -0400
  LV Pool metadata       500ssdthpl_tmeta
  LV Pool data           500ssdthpl_tdata
  LV Status              available
  # open                 1
  LV Size                445.00 GiB
  Allocated pool data    0.00%
  Allocated metadata     10.42%
  Current LE             113920
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8

root@host1:~# lsblk -f
NAME                        FSTYPE      LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1
├─sda2                      vfat              4CED-D9CE
└─sda3                      LVM2_member       cGq679-CM8Y-RrsZ-2DmT-Bpkt-Q1OQ-H3Gr1Y
  ├─pve-swap                swap              5706a06f-74c2-49b5-a847-8584c3bdadf8                  [SWAP]
  ├─pve-root                ext4              27e4ca33-54f2-4a05-af8f-98044b75e650     16.4G    38% /
  ├─pve-data_tmeta
  │ └─pve-data
  └─pve-data_tdata
    └─pve-data
sdb
└─sdb1                      LVM2_member       4kdD1x-x3ed-GyXr-PENf-Ob50-VB4y-KCQJFG
  ├─500ssd-500ssdthpl_tmeta
  │ └─500ssd-500ssdthpl                                                               319.6G    22% /500ssdthpl
  └─500ssd-500ssdthpl_tdata
    └─500ssd-500ssdthpl                                                               319.6G    22% /500ssdthpl
sdc
└─sdc1                      LVM2_member       OoXD8I-hDoo-Fnxm-zmEg-uvRD-TKbt-5SdzvN
  ├─4gspin-vm--301--disk--0
  ├─4gspin-vm--105--disk--0
  ├─4gspin-vm--103--disk--0
  ├─4gspin-4gthpl_tmeta
  │ └─4gspin-4gthpl                                                                   261.1G    42% /4gthpl
  └─4gspin-4gthpl_tdata
    └─4gspin-4gthpl                                                                   261.1G    42% /4gthpl
sr0

lvm> lvs
  LV            VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  4gthpl        4gspin twi-aotz-- 500.00g              0.00   10.41
  vm-103-disk-0 4gspin -wi-a-----  32.00g
  vm-105-disk-0 4gspin -wi-a-----  32.00g
  vm-301-disk-0 4gspin -wi-ao---- 512.00g
  500ssdthpl    500ssd twi-aotz-- 445.00g              0.00   10.42
  data          pve    twi-a-tz--  <64.49g             0.00   1.60
  root          pve    -wi-ao----  29.50g
  swap          pve    -wi-ao----   8.00g

lvm> vgs
  VG     #PV #LV #SN Attr   VSize    VFree
  4gspin   1   4   0 wz--n-   <3.64t  <2.59t
  500ssd   1   1   0 wz--n- <447.13g   1.91g
  pve      1   3   0 wz--n- <118.74g  14.75g



If I manually mount with:
mount /dev/500ssd/500ssdthpl /500ssdthpl
it mounts fine, but if I place it in fstab it won't mount. Do I have to use a UUID or something?
It seems it has to do with the file system?

In my notes it looks like I ran these commands to create it.

CREATE LOGICAL VOLUME

root@host1:/# lvcreate -L 445G --thinpool 500ssdthpl 500ssd
  Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "500ssdthpl" created.

CREATE FILE SYSTEM

/dev/500ssd# mkfs.ext4 /dev/500ssd/500ssdthpl
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 116654080 4k blocks and 29163520 inodes
Filesystem UUID: 594b0d0f-4dee-4dbf-acf6-d551d6cb63da
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done


On my last attempt I added this line to fstab:
mount /dev/500ssd/500ssdthpl /500ssdthpl ext4

Here is what verify said:
root@host1:~# findmnt --verify
/dev/500ssd/500ssdthpl
[W] non-canonical target path (real: /dev/mapper/500ssd-500ssdthpl)
[E] target is not a directory
[W] unreachable source: mount: No such file or directory
[W] /500ssdthpl seems unsupported by the current kernel
[W] cannot detect on-disk filesystem type


The last time I restarted the host it went into emergency mode, and I had to remove the bad mount entry; after it restarted I manually mounted and all was well. But now every time I restart I have to remember how to mount it, etc.


Any help would be appreciated. Please excuse my ignorance; I have spent hours trying to understand how to add this to fstab and just cannot figure it out.

EDIT: You may surmise from my config that I have the same problem mounting "4gthpl" as well. I just thought it would be easier to use "500ssdthpl" as my example. Just thought I should clarify.

Edit again: I think this is what I tried when I restarted the host, and it didn't work. This was before I knew about the verify command.
/dev/500ssd/500ssdthpl ext4 defaults,errors=remount-ro 0 2

My fstab currently looks like this:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
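For comparison with the entries above: an fstab line needs all six fields and no leading "mount" keyword. A sketch of what an entry for the new filesystem could look like (the UUID is the one mkfs.ext4 printed earlier in this post; nofail is an assumption, added so a missing LVM volume cannot drop the host into emergency mode at boot):

```
UUID=594b0d0f-4dee-4dbf-acf6-d551d6cb63da /500ssdthpl ext4 defaults,nofail,errors=remount-ro 0 2
```

The mount point must also exist as a directory before mounting, which is what the "[E] target is not a directory" error from findmnt --verify is pointing at.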
 
Last edited:
Hi,
I am running Proxmox VE 6.3-2
not related to your problem, but I'd recommend upgrading to the latest 6.4.

If I manually mount with:
mount /dev/500ssd/500ssdthpl /500ssdthpl
it mounts fine but if I place it in fstab it wont mount. Do I have to use a UUID or something?
It seems it has to do with the file system?
I'm surprised the mount command worked at all. The thin pool does not contain any file system, it cannot be mounted as such! You could create a logical volume in the thin pool, format it with a file system and then mount that, but if you are planning to use the thin pool for VM images you don't need to do that. Just add the thin pool as a storage to your Proxmox VE (Datacenter > Storage > Add > LVM Thin in the UI) and you can start using it there. No need to mount the pool, it just has to be activated (which should happen automatically at boot).
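The approach described above (a thin LV inside the pool, formatted and mounted, instead of mounting the pool itself) could be sketched roughly like this; the LV name "data500" and its size are made-up examples, not commands from this thread:

```sh
# create a thin (sparsely allocated) LV inside the existing pool
lvcreate -V 400G --thin -n data500 500ssd/500ssdthpl

# put the file system on the thin LV, not on the pool device
mkfs.ext4 /dev/500ssd/data500

# create the mount point and mount manually
mkdir -p /mnt/data500
mount /dev/500ssd/data500 /mnt/data500
```

These commands need root and a real LVM setup, so they are shown as a transcript rather than something to paste blindly.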

 
Isn't this where the file system was created?

CREATE FILE SYSTEM
/dev/500ssd# mkfs.ext4 /dev/500ssd/500ssdthpl
mke2fs 1.44.5 (15-Dec-2018)
[same mkfs.ext4 output as quoted in the first post]
Writing superblocks and filesystem accounting information: done

It looks like I added it as a directory... maybe this was my mistake?

So even though I did create a file system, is there no way to mount it from fstab?

 
This probably destroyed the thin pool, and you might run into further issues because LVM will still think it owns the underlying device. I'd suggest re-creating the thin pool from scratch and adding it as a storage. No need to create a file system on top of it; you can add the thin pool directly as a Proxmox VE storage to start using it.
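A rough sketch of that recovery path; the storage ID "ssd-thin" is a made-up example, and the lvremove step permanently destroys everything on the pool, so it should only run after the data has been moved off:

```sh
# remove the damaged pool (irreversibly destroys its contents)
lvremove 500ssd/500ssdthpl

# re-create the thin pool as in the first post
lvcreate -L 445G --thinpool 500ssdthpl 500ssd

# register it as an LVM-Thin storage (CLI equivalent of
# Datacenter > Storage > Add > LVM-Thin)
pvesm add lvmthin ssd-thin --vgname 500ssd --thinpool 500ssdthpl --content images,rootdir
```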
 
It didn't destroy it... it's still there. I created a logical volume within it like you suggested and then formatted it. But if it is going to cause issues I will do as you recommend. I'll have to move everything off of that SSD, I guess, and then start from scratch. I won't be able to do this with my 4 TB mechanical drive as I have nowhere to go with the data. I'll look up the steps to do as you suggest. Thanks.

PS. I think what made all the listings I provided confusing is my terrible naming convention.

EDIT: I didn't format the LV; I created a file system on it. Just clarifying.
 
Last edited:

Before I wipe my 500ssd: if anyone has any idea how I can get this to mount with fstab, it will save me hours of work. It DOES mount manually, so I don't know why I can't get it to mount from fstab. Hell, I'm typing from a VM inside it right now!