[SOLVED] Failed to run lxc.hook.pre-start

Not exactly sure what my issue was.

I had a container that started fine. Then I updated Debian inside it, and afterwards it wouldn't start anymore. It was not the Debian 13.1 issue mentioned above, though, because I checked and the container still reported Debian 13.0.

Anyway, updating PVE (including pve-container from 6.0.9 to 6.0.11) helped and the container is now starting again. No host reboot was necessary.
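
For anyone who wants to try the same, a minimal sketch of the standard apt-based upgrade (this assumes the default PVE repositories are configured; the container ID is a placeholder):

Bash:
apt update
apt full-upgrade                     # pulls in the newer pve-container package
pveversion -v | grep pve-container   # confirm the new version is installed
pct start <ID>                       # then try the container again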
 
Hi,
Same issue here, but I already have binutils installed... any other ideas?
Please post the pct start ID --debug log (replacing ID with the actual ID), the output of pveversion -v, and the container configuration from pct config ID.
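
For example, assuming the affected container is ID 101 (as in the log below), the three commands would be:

Bash:
pct start 101 --debug
pveversion -v
pct config 101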
 
run_buffer: 571 Script exited with status 1
lxc_init: 845 Failed to run lxc.hook.pre-start for container "101"
__lxc_start: 2034 Failed to initialize container "101"
d 0 hostid 100000 range 65536
INFO lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "101", config section "lxc"
DEBUG utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: unable to open file '/fastboot.tmp.106455' - Disk quota exceeded

DEBUG utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: error in setup task PVE::LXC::Setup::pre_start_hook

ERROR utils - ../src/lxc/utils.c:run_buffer:571 - Script exited with status 1
ERROR start - ../src/lxc/start.c:lxc_init:845 - Failed to run lxc.hook.pre-start for container "101"
ERROR start - ../src/lxc/start.c:__lxc_start:2034 - Failed to initialize container "101"
INFO utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "101", config section "lxc"
startup for container '101' failed

Just noticed the DISK QUOTA EXCEEDED... betting that might be the issue.
 
root@proxmox:~# pveversion -v
proxmox-ve: 8.4.0 (running kernel: 6.8.12-15-pve)
pve-manager: 8.4.14 (running version: 8.4.14/b502d23c55afcba1)
proxmox-kernel-helper: 8.1.4
proxmox-kernel-6.8: 6.8.12-15
proxmox-kernel-6.8.12-15-pve-signed: 6.8.12-15
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
ceph-fuse: 17.2.8-pve2
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u2
frr-pythontools: 10.2.3-1+pve1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.2
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.2
libpve-cluster-perl: 8.1.2
libpve-common-perl: 8.3.4
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.7
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.4.7-1
proxmox-backup-file-restore: 3.4.7-1
proxmox-backup-restore-image: 0.7.0
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.4
proxmox-mail-forward: 0.3.3
proxmox-mini-journalreader: 1.5
proxmox-offline-mirror-helper: 0.6.8
proxmox-widget-toolkit: 4.3.13
pve-cluster: 8.1.2
pve-container: 5.3.3
pve-docs: 8.4.1
pve-edk2-firmware: 4.2025.02-4~bpo12+1
pve-esxi-import-tools: 0.7.4
pve-firewall: 5.1.2
pve-firmware: 3.16-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.5
pve-qemu-kvm: 9.2.0-7
pve-xtermjs: 5.5.0-2
qemu-server: 8.4.4
smartmontools: 7.3-pve1
spiceterm: 3.3.1
swtpm: 0.8.0+pve1
vncterm: 1.8.1
zfsutils-linux: 2.2.8-pve1
root@proxmox:~#
 
root@proxmox:~# pct config 101
arch: amd64
cores: 2
description: <div align='center'>%0A <a href='https%3A//Helper-Scripts.com' target='_blank' rel='noopener noreferrer'>%0A <img src='https%3A//raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/images/logo-81x112.png' alt='Logo' style='width%3A81px;height%3A112px;'/>%0A </a>%0A%0A <h2 style='font-size%3A 24px; margin%3A 20px 0;'>Deluge LXC</h2>%0A%0A <p style='margin%3A 16px 0;'>%0A <a href='https%3A//ko-fi.com/community_scripts' target='_blank' rel='noopener noreferrer'>%0A <img src='https%3A//img.shields.io/badge/&#x2615;-Buy us a coffee-blue' alt='spend Coffee' />%0A </a>%0A </p>%0A%0A <span style='margin%3A 0 10px;'>%0A <i class="fa fa-github fa-fw" style="color%3A #f5f5f5;"></i>%0A <a href='https%3A//github.com/community-scripts/ProxmoxVE' target='_blank' rel='noopener noreferrer' style='text-decoration%3A none; color%3A #00617f;'>GitHub</a>%0A </span>%0A <span style='margin%3A 0 10px;'>%0A <i class="fa fa-comments fa-fw" style="color%3A #f5f5f5;"></i>%0A <a href='https%3A//github.com/community-scripts/ProxmoxVE/discussions' target='_blank' rel='noopener noreferrer' style='text-decoration%3A none; color%3A #00617f;'>Discussions</a>%0A </span>%0A <span style='margin%3A 0 10px;'>%0A <i class="fa fa-exclamation-circle fa-fw" style="color%3A #f5f5f5;"></i>%0A <a href='https%3A//github.com/community-scripts/ProxmoxVE/issues' target='_blank' rel='noopener noreferrer' style='text-decoration%3A none; color%3A #00617f;'>Issues</a>%0A </span>%0A</div>%0A%0APort 8112%0A
features: keyctl=1,nesting=1
hostname: deluge
memory: 2048
mp1: /mnt/media,mp=/mnt/downloads
net0: name=eth0,bridge=vmbr1,hwaddr=BC:24:11:51:0B:8C,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: One_TEE:subvol-101-disk-0,size=4G
swap: 512
tags: community-script;torrent
unprivileged: 1
unused0: store:subvol-101-disk-0
 
Just noticed the DISK QUOTA EXCEEDED... betting that might be the issue.
Yes, you might want to resize the ZFS dataset behind rootfs: One_TEE:subvol-101-disk-0,size=4G, or remove data you don't need from it.
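
A minimal sketch of both options, assuming the rootfs really is a ZFS subvol as the config suggests (the dataset name is taken from the config above; the +4G is just an example value):

Bash:
# see how full the container's dataset is
zfs list -o name,used,avail,refquota | grep subvol-101-disk-0

# option 1: grow the root disk by 4G (updates the container config and the ZFS refquota)
pct resize 101 rootfs +4G

# option 2: free up space inside the container's filesystem instead
pct mount 101
du -xh --max-depth=2 /var/lib/lxc/101/rootfs | sort -h | tail
pct unmount 101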
 
root@rpgsrv:~# pct start 101 -debug
run_buffer: 571 Script exited with status 32
lxc_init: 845 Failed to run lxc.hook.pre-start for container "101"
__lxc_start: 2047 Failed to initialize container "101"
0 hostid 100000 range 65536
INFO lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "101", config section "lxc"
DEBUG utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: /dev/mapper/data_SSD512-vm--101--disk--0 already mounted or mount point busy.
dmesg(1) may have more information after failed mount system call.

DEBUG utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: command 'mount /dev/dm-10 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

ERROR utils - ../src/lxc/utils.c:run_buffer:571 - Script exited with status 32
ERROR start - ../src/lxc/start.c:lxc_init:845 - Failed to run lxc.hook.pre-start for container "101"
ERROR start - ../src/lxc/start.c:__lxc_start:2047 - Failed to initialize container "101"
INFO utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "101", config section "lxc"
startup for container '101' failed
 
root@rpgsrv:~# pveversion -v
proxmox-ve: 9.1.0 (running kernel: 6.17.2-2-pve)
pve-manager: 9.1.1 (running version: 9.1.1/42db4a6cf33dac83)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.2-2-pve-signed: 6.17.2-2
proxmox-kernel-6.17: 6.17.2-2
proxmox-kernel-6.14.11-4-pve-signed: 6.14.11-4
proxmox-kernel-6.14: 6.14.11-4
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20250812.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.4
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.0.15
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.3
libpve-rs-perl: 0.11.3
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-3
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.0-1
proxmox-backup-file-restore: 4.1.0-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.2
pve-cluster: 9.0.7
pve-container: 6.0.18
pve-docs: 9.1.1
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.0.8
pve-i18n: 3.6.4
pve-qemu-kvm: 10.1.2-4
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.0
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1
 
root@rpgsrv:~# pct config 101
arch: amd64
cores: 8
features: nesting=1
hostname: wing
memory: 32768
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:F5:C1:E8,ip=dhcp,type=veth
onboot: 0
ostype: debian
rootfs: local-lvm-SSD512:vm-101-disk-0,size=128G
swap: 32768
unprivileged: 1
 
I also noticed that it is now not possible to create new LXC containers on this disk, although everything used to work before.
 
Does the problem go away if you reboot the node?
 
Could you do the following?

Run journalctl -f in one shell, then run pct start 101 -debug in another, and paste the journal output that is generated.

The output of lvs, findmnt and pvesm status would also be interesting.
 
Please edit that to use code tags. I wouldn't consider this readable.
Could you do the following?

Run journalctl -f in one shell, then run pct start 101 -debug in another, and paste the journal output that is generated.

The output of lvs, findmnt and pvesm status would also be interesting.

Bash:
root@rpgsrv:~# pct start 101 -debug
run_buffer: 571 Script exited with status 32
lxc_init: 845 Failed to run lxc.hook.pre-start for container "101"
__lxc_start: 2047 Failed to initialize container "101"
0 hostid 100000 range 65536
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "101", config section "lxc"
DEBUG    utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: /dev/mapper/data_SSD512-vm--101--disk--0 already mounted or mount point busy.
dmesg(1) may have more information after failed mount system call.

DEBUG    utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: command 'mount /dev/dm-10 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

ERROR    utils - ../src/lxc/utils.c:run_buffer:571 - Script exited with status 32
ERROR    start - ../src/lxc/start.c:lxc_init:845 - Failed to run lxc.hook.pre-start for container "101"
ERROR    start - ../src/lxc/start.c:__lxc_start:2047 - Failed to initialize container "101"
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "101", config section "lxc"
startup for container '101' failed



root@rpgsrv:~# journalctl -f
Nov 28 17:56:33 rpgsrv systemd[1]: Started user@1000.service - User Manager for UID 1000.
Nov 28 17:56:33 rpgsrv systemd[1]: Started session-27.scope - Session 27 of User rpgadmin.
Nov 28 17:56:41 rpgsrv su[190200]: (to root) rpgadmin on pts/0
Nov 28 17:56:41 rpgsrv su[190200]: pam_unix(su-l:session): session opened for user root(uid=0) by rpgadmin(uid=1000)
Nov 28 17:57:01 rpgsrv sshd-session[190263]: Accepted password for rpgadmin from 192.168.0.188 port 56880 ssh2
Nov 28 17:57:01 rpgsrv sshd-session[190263]: pam_unix(sshd:session): session opened for user rpgadmin(uid=1000) by rpgadmin(uid=0)
Nov 28 17:57:01 rpgsrv systemd-logind[995]: New session 29 of user rpgadmin.
Nov 28 17:57:01 rpgsrv systemd[1]: Started session-29.scope - Session 29 of User rpgadmin.
Nov 28 17:57:07 rpgsrv su[190297]: (to root) rpgadmin on pts/1
Nov 28 17:57:07 rpgsrv su[190297]: pam_unix(su-l:session): session opened for user root(uid=0) by rpgadmin(uid=1000)
Nov 28 17:57:37 rpgsrv pct[190382]: <root@pam> starting task UPID:rpgsrv:0002E7B3:006D2FB6:69299C41:vzstart:101:root@pam:
Nov 28 17:57:37 rpgsrv pct[190387]: starting CT 101: UPID:rpgsrv:0002E7B3:006D2FB6:69299C41:vzstart:101:root@pam:
Nov 28 17:57:37 rpgsrv systemd[1]: Created slice system-pve\x2dcontainer\x2ddebug.slice - Slice /system/pve-container-debug.
Nov 28 17:57:37 rpgsrv systemd[1]: Started pve-container-debug@101.service - PVE LXC Container: 101.
Nov 28 17:57:38 rpgsrv kernel: audit: type=1400 audit(1764334658.510:131): apparmor="DENIED" operation="getattr" class="posix_mqueue" profile="/usr/bin/lxc-start" name="/" pid=190396 comm="vgs" requested="getattr" denied="getattr" class="posix_mqueue" fsuid=0 ouid=0 olabel="unconfined"
Nov 28 17:57:38 rpgsrv kernel: audit: type=1400 audit(1764334658.563:132): apparmor="DENIED" operation="getattr" class="posix_mqueue" profile="/usr/bin/lxc-start" name="/" pid=190397 comm="lvs" requested="getattr" denied="getattr" class="posix_mqueue" fsuid=0 ouid=0 olabel="unconfined"
Nov 28 17:57:38 rpgsrv kernel: EXT4-fs warning (device dm-10): ext4_multi_mount_protect:328: MMP interval 42 higher than expected, please wait.
Nov 28 17:58:21 rpgsrv kernel: EXT4-fs warning (device dm-10): ext4_multi_mount_protect:377: Device is already active on another node.
Nov 28 17:58:21 rpgsrv kernel: EXT4-fs warning (device dm-10): ext4_multi_mount_protect:377: MMP failure info: last update time: 1763929640, last update node: rpgsrv, last update device: dm-10
Nov 28 17:58:21 rpgsrv pvestatd[1360]: unable to get PID for CT 101 (not running?)
Nov 28 17:58:21 rpgsrv pct[190387]: startup for container '101' failed
Nov 28 17:58:21 rpgsrv pct[190382]: <root@pam> end task UPID:rpgsrv:0002E7B3:006D2FB6:69299C41:vzstart:101:root@pam: startup for container '101' failed
Nov 28 17:58:21 rpgsrv pvestatd[1360]: status update time (35.689 seconds)
Nov 28 17:58:21 rpgsrv pmxcfs[1187]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve-storage-9.0/rpgsrv/local-lvm-SSD256: -1
Nov 28 17:58:21 rpgsrv pmxcfs[1187]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve-storage-9.0/rpgsrv/local-lvm-SSD512: -1
Nov 28 17:58:21 rpgsrv pmxcfs[1187]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve-storage-9.0/rpgsrv/local: -1
Nov 28 17:58:22 rpgsrv systemd[1]: pve-container-debug@101.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 17:58:22 rpgsrv systemd[1]: pve-container-debug@101.service: Failed with result 'exit-code'.
Nov 28 17:58:22 rpgsrv systemd[1]: pve-container-debug@101.service: Consumed 1.312s CPU time, 88.9M memory peak.



root@rpgsrv:~# lvs
  LV            VG          Attr       LSize   Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
  thin_pool     data_SSD512 twi-aotz-- 476.00g                  5.01   11.94
  vm-101-disk-0 data_SSD512 Vwi-a-tz-- 128.00g thin_pool        18.62
  data          pve         twi-aotz-- 140.87g                  17.61  1.69
  root          pve         -wi-ao---- <69.25g
  swap          pve         -wi-ao----   8.00g
  vm-100-disk-0 pve         Vwi---tz--   8.00g data
  vm-102-disk-0 pve         Vwi---tz--  50.00g data
 
 
 
  root@rpgsrv:~# findmnt
TARGET                                        SOURCE               FSTYPE      OPTIONS
/                                             /dev/mapper/pve-root ext4        rw,relatime,errors=remount-ro
├─/sys                                        sysfs                sysfs       rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security                      securityfs           securityfs  rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup                            cgroup2              cgroup2     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/pstore                            none                 pstore      rw,nosuid,nodev,noexec,relatime
│ ├─/sys/firmware/efi/efivars                 efivarfs             efivarfs    rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf                               bpf                  bpf         rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/kernel/debug                         debugfs              debugfs     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/tracing                       tracefs              tracefs     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/fuse/connections                  fusectl              fusectl     rw,nosuid,nodev,noexec,relatime
│ └─/sys/kernel/config                        configfs             configfs    rw,nosuid,nodev,noexec,relatime
├─/proc                                       proc                 proc        rw,relatime
│ └─/proc/sys/fs/binfmt_misc                  systemd-1            autofs      rw,relatime,fd=37,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=7217
│   └─/proc/sys/fs/binfmt_misc                binfmt_misc          binfmt_misc rw,nosuid,nodev,noexec,relatime
├─/dev                                        udev                 devtmpfs    rw,nosuid,relatime,size=65829492k,nr_inodes=16457373,mode=755,inode64
│ ├─/dev/pts                                  devpts               devpts      rw,nosuid,noexec,relatime,gid=5,mode=600,ptmxmode=000
│ ├─/dev/shm                                  tmpfs                tmpfs       rw,nosuid,nodev,inode64
│ ├─/dev/mqueue                               mqueue               mqueue      rw,nosuid,nodev,noexec,relatime
│ └─/dev/hugepages                            hugetlbfs            hugetlbfs   rw,nosuid,nodev,relatime,pagesize=2M
├─/run                                        tmpfs                tmpfs       rw,nosuid,nodev,noexec,relatime,size=13175316k,mode=755,inode64
│ ├─/run/lock                                 tmpfs                tmpfs       rw,nosuid,nodev,noexec,relatime,size=5120k,inode64
│ ├─/run/rpc_pipefs                           sunrpc               rpc_pipefs  rw,relatime
│ ├─/run/credentials/systemd-journald.service tmpfs                tmpfs       ro,nosuid,nodev,noexec,relatime,nosymfollow,size=1024k,nr_inodes=1024,mode=700,inode64,noswap
│ ├─/run/user/1000                            tmpfs                tmpfs       rw,nosuid,nodev,relatime,size=13175312k,nr_inodes=3293828,mode=700,uid=1000,gid=1000,inode64
│ └─/run/credentials/getty@tty1.service       tmpfs                tmpfs       ro,nosuid,nodev,noexec,relatime,nosymfollow,size=1024k,nr_inodes=1024,mode=700,inode64,noswap
├─/tmp                                        tmpfs                tmpfs       rw,nosuid,nodev,nr_inodes=1048576,inode64
├─/boot/efi                                   /dev/nvme0n1p2       vfat        rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro
├─/var/lib/lxcfs                              lxcfs                fuse.lxcfs  rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other
└─/etc/pve                                    /dev/fuse            fuse        rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other



root@rpgsrv:~# pvesm status
Name                    Type     Status     Total (KiB)      Used (KiB) Available (KiB)        %
local                    dir     active        70892712         6995320        60250520    9.87%
local-lvm-SSD256     lvmthin     active       147714048        26012443       121701604   17.61%
local-lvm-SSD512     lvmthin     active       499122176        25006021       474116154    5.01%
 
Could you do the following?

Run journalctl -f in one shell, then run pct start 101 -debug in another, and paste the journal output that is generated.

The output of lvs, findmnt and pvesm status would also be interesting.
Are there any solutions to this problem?