Proxmox 7, lvmthin: LVs inactive after reboot

maxprox

Hello,
I have a new, small Proxmox PVE system on an old ACER / Gateway server with an entry-level ACER / Gigabyte mainboard.
There is only one container, and it does not start. It is an LVM-thin installation, no ZFS.
The last thing I did was clone the Proxmox system SSD to a second, identical SSD, 1:1 disk-to-disk, with Clonezilla.
Perhaps Clonezilla deactivated some LVs ...
Before the cloning, the LXC container worked fine.
I don't know what I can do to solve this problem.
Activating the two inactive LVs does not work.

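For reference, a typical manual activation attempt would look like the following sketch; the volume group name thinpool2tbx is taken from the lvscan output further down.

Bash:
# list all LVs and their activation state
lvscan

# try to activate every LV in the affected volume group
vgchange -ay thinpool2tbx

# or activate a single LV directly
lvchange -ay thinpool2tbx/vm-800-disk-0
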
This is the message I get:

Bash:
22:04root@proxbackup~# pct start 800
run_buffer: 316 Script exited with status 2
lxc_init: 816 Failed to run lxc.hook.pre-start for container "800"
__lxc_start: 2007 Failed to initialize container "800"
startup for container '800' failed
## and EDIT:
22:45root@proxbackup~# lxc-start -n 800 -F -l DEBUG -o lxc-800_v1.log
lxc-start 800 20210726204552.823 INFO     confile - confile.c:set_config_idmaps:2092 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 800 20210726204552.823 INFO     confile - confile.c:set_config_idmaps:2092 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 800 20210726204552.823 INFO     lsm - lsm/lsm.c:lsm_init_static:40 - Initialized LSM security driver AppArmor
lxc-start 800 20210726204552.823 INFO     conf - conf.c:run_script_argv:332 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "800", config section "lxc"
lxc-start 800 20210726204553.907 DEBUG    conf - conf.c:run_buffer:305 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 800 lxc pre-start produced output: failed to get device path

lxc-start 800 20210726204553.921 ERROR    conf - conf.c:run_buffer:316 - Script exited with status 2
lxc-start 800 20210726204553.921 ERROR    start - start.c:lxc_init:816 - Failed to run lxc.hook.pre-start for container "800"
lxc-start 800 20210726204553.921 ERROR    start - start.c:__lxc_start:2007 - Failed to initialize container "800"
lxc-start 800 20210726204553.921 INFO     conf - conf.c:run_script_argv:332 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "800", config section "lxc"
lxc-start 800 20210726204554.664 DEBUG    conf - conf.c:run_buffer:305 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 800 lxc post-stop produced output: umount: /var/lib/lxc/.pve-staged-mounts/mp2: not mounted.

lxc-start 800 20210726204554.664 DEBUG    conf - conf.c:run_buffer:305 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 800 lxc post-stop produced output: command 'umount -- /var/lib/lxc/.pve-staged-mounts/mp2' failed: exit code 32

lxc-start 800 20210726204554.716 DEBUG    conf - conf.c:run_buffer:305 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 800 lxc post-stop produced output: umount: /var/lib/lxc/.pve-staged-mounts/mp1: not mounted.

lxc-start 800 20210726204554.717 DEBUG    conf - conf.c:run_buffer:305 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 800 lxc post-stop produced output: command 'umount -- /var/lib/lxc/.pve-staged-mounts/mp1' failed: exit code 32

lxc-start 800 20210726204554.747 INFO     conf - conf.c:run_script_argv:332 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "800", config section "lxc"
lxc-start 800 20210726204555.249 ERROR    lxc_start - tools/lxc_start.c:main:308 - The container failed to start
lxc-start 800 20210726204555.249 ERROR    lxc_start - tools/lxc_start.c:main:313 - Additional information can be obtained by setting the --logfile and --logpriority options
22:46root@proxbackup~#

The config:
Bash:
22:05root@proxbackup~# pct config 800
arch: amd64
cores: 2
hostname: pbs
memory: 4096
mp0: thinpool4t:vm-800-disk-0,mp=/srv/pbs4tpool,replicate=0,size=3600G
mp1: thinpool2tbx:vm-800-disk-0,mp=/srv/pbs2tbxpool,replicate=0,size=1800G
mp2: thinpool2twx:vm-800-disk-0,mp=/srv/pbs2twxpool,replicate=0,size=1800G
nameserver: 10.16.0.1 1.1.1.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.18.100.252,hwaddr=66:46:72:8D:xx:xx,ip=10.18.100.21/16,type=veth
ostype: debian
rootfs: local:800/vm-800-disk-0.raw,size=18G
searchdomain: bs-xx.llan
swap: 2048
unprivileged: 1

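The "failed to get device path" message from the pre-start hook presumably refers to one of the mount point volumes above. A quick way to check whether the device nodes behind mp1/mp2 actually exist (a sketch; pvesm path resolves a PVE volume ID to its device path):

Bash:
# resolve the mount point volumes to device paths
pvesm path thinpool2tbx:vm-800-disk-0
pvesm path thinpool2twx:vm-800-disk-0

# the thin LV device nodes only exist while the LVs are active
ls -l /dev/thinpool2tbx/vm-800-disk-0 /dev/thinpool2twx/vm-800-disk-0
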
The storage.cfg:
Bash:
22:03root@proxbackup~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content images,iso,backup,rootdir,snippets,vztmpl
    prune-backups keep-all=1

lvmthin: thinpool4t
    thinpool thinpool4t
    vgname thinpool4t
    content rootdir,images
    nodes proxbackup

lvmthin: thinpool2tbx
    thinpool thinpool2tbx
    vgname thinpool2tbx
    content rootdir,images
    nodes proxbackup

lvmthin: thinpool2twx
    thinpool thinpool2twx
    vgname thinpool2twx
    content rootdir,images
    nodes proxbackup

Some LVM info:
Bash:
22:03root@proxbackup~# lvscan
  ACTIVE            '/dev/thinpool2twx/thinpool2twx' [<1,79 TiB] inherit
  ACTIVE            '/dev/thinpool2twx/vm-800-disk-0' [<1,76 TiB] inherit
  inactive          '/dev/thinpool2tbx/thinpool2tbx' [<1,79 TiB] inherit
  inactive          '/dev/thinpool2tbx/vm-800-disk-0' [<1,76 TiB] inherit
  ACTIVE            '/dev/thinpool4t/thinpool4t' [<3,61 TiB] inherit
  ACTIVE            '/dev/thinpool4t/vm-800-disk-1' [<3,52 TiB] inherit
  ACTIVE            '/dev/thinpool4t/vm-800-disk-0' [<3,52 TiB] inherit
22:03root@proxbackup~# vgs
  VG           #PV #LV #SN Attr   VSize  VFree
  thinpool2tbx   1   2   0 wz--n- <1,82t 512,00m
  thinpool2twx   1   2   0 wz--n- <1,82t 512,00m
  thinpool4t     1   3   0 wz--n- <3,64t 512,00m
22:04root@proxbackup~# lvs
  LV            VG           Attr       LSize  Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  thinpool2tbx  thinpool2tbx twi---tz-- <1,79t                                                      
  vm-800-disk-0 thinpool2tbx Vwi---tz-- <1,76t thinpool2tbx                                          
  thinpool2twx  thinpool2twx twi-aotz-- <1,79t                     5,63   0,46                      
  vm-800-disk-0 thinpool2twx Vwi-a-tz-- <1,76t thinpool2twx        5,72                              
  thinpool4t    thinpool4t   twi-aotz-- <3,61t                     5,27   0,79                      
  vm-800-disk-0 thinpool4t   Vwi-a-tz-- <3,52t thinpool4t          5,38                              
  vm-800-disk-1 thinpool4t   Vwi-a-tz-- <3,52t thinpool4t          0,03

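Note the fifth character of the Attr column in the lvs output above: the working pools show twi-aotz-- (a = active), while the thinpool2tbx volumes show twi---tz--, i.e. not activated. A quick way to list only the inactive LVs (sketch):

Bash:
# show the activation state as an explicit column
lvs -o vg_name,lv_name,lv_attr,lv_active

# or simply filter the lvscan output
lvscan | grep -i inactive
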
And the PVE version:

Code:
22:02root@proxbackup~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve)
pve-manager: 7.0-10 (running version: 7.0-10/d2f465d3)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.11.22-3-pve: 5.11.22-5
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-9
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.7-1
proxmox-backup-file-restore: 2.0.7-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-5
pve-cluster: 7.0-3
pve-container: 4.0-8
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-11
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1

Any ideas?
Regards, maxprox
 
Perhaps the problem is solved; here is what I found:

I had a deeper look at LVM. lvscan shows two inactive LVs, and "vgchange -ay" is prohibited, at first because of an active lv_tmeta and then because of an active lv_tdata.
After deactivating both the lv_tmeta and the lv_tdata, I can activate the needed main LVs.

To see what I mean, have a look at my commands:

Bash:
3:31root@proxbackup~# lvchange -ay thinpool2tbx
  Activation of logical volume thinpool2tbx/thinpool2tbx is prohibited while logical volume thinpool2tbx/thinpool2tbx_tmeta is active.
  Activation of logical volume thinpool2tbx/vm-800-disk-0 is prohibited while logical volume thinpool2tbx/thinpool2tbx_tmeta is active.

23:33root@proxbackup~# lvchange -a n thinpool2tbx/thinpool2tbx_tmeta

23:33root@proxbackup~# lvchange -ay thinpool2tbx
  Activation of logical volume thinpool2tbx/thinpool2tbx is prohibited while logical volume thinpool2tbx/thinpool2tbx_tdata is active.
  Activation of logical volume thinpool2tbx/vm-800-disk-0 is prohibited while logical volume thinpool2tbx/thinpool2tbx_tdata is active.

23:33root@proxbackup~# lvchange -a n thinpool2tbx/thinpool2tbx_tdata

23:34root@proxbackup~# lvchange -ay thinpool2tbx
23:35root@proxbackup~# vgchange -ay
 ## at last, both commands run without errors

23:38root@proxbackup~# lvscan
  ACTIVE            '/dev/thinpool2twx/thinpool2twx' [<1,79 TiB] inherit
  ACTIVE            '/dev/thinpool2twx/vm-800-disk-0' [<1,76 TiB] inherit
  ACTIVE            '/dev/thinpool2tbx/thinpool2tbx' [<1,79 TiB] inherit
  ACTIVE            '/dev/thinpool2tbx/vm-800-disk-0' [<1,76 TiB] inherit
  ACTIVE            '/dev/thinpool4t/thinpool4t' [<3,61 TiB] inherit
  ACTIVE            '/dev/thinpool4t/vm-800-disk-1' [<3,52 TiB] inherit
  ACTIVE            '/dev/thinpool4t/vm-800-disk-0' [<3,52 TiB] inherit

And now the container starts again.

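To verify, the container can be started and checked again, for example:

Bash:
pct start 800
pct status 800
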
EDIT:
BUT ONLY UNTIL THE NEXT REBOOT!
After a reboot, I get the same error again ...
(again there are two inactive LVs, see the first post above)

Any ideas?
Regards, maxprox
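
Until the root cause is found, one possible stop-gap is to re-run this deactivate/activate sequence automatically at boot, for example from a systemd oneshot unit or a cron @reboot entry. The following is only a sketch; the script path and the hard-coded VG name are assumptions based on this thread and are untested:

Bash:
#!/bin/bash
# hypothetical helper, e.g. /usr/local/sbin/activate-thinpool2tbx.sh
set -e
VG=thinpool2tbx   # the volume group that stays inactive after reboot

# deactivate the internal tmeta/tdata LVs that block activation
lvchange -an "${VG}/${VG}_tmeta" || true
lvchange -an "${VG}/${VG}_tdata" || true

# then activate the thin pool and its thin volumes
lvchange -ay "$VG"
vgchange -ay "$VG"

Hooked into boot (e.g. a oneshot unit with WantedBy=multi-user.target), this would at least save the manual step after every reboot.
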
 
I am in the same camp. I have two identical 12 TB HDDs. One comes up fine on every reboot; the other I need to deactivate/activate manually. It works great until I reboot the server.
 
Just as an update, I still have to do the same workaround as described by maxprox. After every reboot, there is one drive in particular that I need to run lvchange -an on, but I have to wait about 5 minutes after Proxmox is done booting before it lets me deactivate the volume. Then I can run lvchange -ay, and after about 5 minutes everything is working again.

Nothing is really unique about the drive. It's the same model as another drive I have in there; both it and its twin are just used as slow storage drives. But it's always that one specific drive that I have to deactivate/reactivate after reboots.
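
For reference, the wait-and-retry described above could be scripted roughly like this (a sketch; the VG name is a placeholder and the retry interval is only a guess based on the roughly five-minute delay mentioned):

Bash:
#!/bin/bash
# hypothetical wrapper around the manual deactivate/reactivate workaround
VG=slowstore12tb   # placeholder for the VG on the affected 12 TB drive

# keep retrying until LVM accepts the deactivation
until lvchange -an "$VG"; do
    echo "deactivation refused, retrying in 30 seconds ..."
    sleep 30
done

# then reactivate all LVs in the VG
lvchange -ay "$VG"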