lvm2 issues (nohup or running from CT start script)

soal

Hi,

Recently I've run into a weird issue with lvm2.
Tools like "lvs", "vgs", "lvcreate", etc. report errors when run from a CT start script or under "nohup":
Code:
root@kvmtest:~# nohup lvs >lvs.log
nohup: ignoring input and redirecting stderr to stdout
root@kvmtest:~# nohup vgs >vgs.log
nohup: ignoring input and redirecting stderr to stdout
root@kvmtest:~# cat lvs.log vgs.log 
  stdin: fdopen failed: Invalid argument
  stdin: fclose failed: Invalid argument
  stdin: fdopen failed: Invalid argument
  stdin: fdopen failed: Invalid argument
  stdin: fclose failed: Invalid argument
  stdin: fdopen failed: Invalid argument

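For what it's worth, the "fdopen failed: Invalid argument" message suggests lvm2 is trying to reopen a stdin it doesn't like (nohup replaces stdin when it detects a terminal). A quick way to test that hypothesis (my assumption, not a confirmed diagnosis) is to hand the tools different stdins explicitly:
Code:
# stdin closed outright - I'd expect this to reproduce the fdopen error
lvs <&- >lvs-closed.log 2>&1

# stdin redirected from /dev/null (open and readable) - may avoid it
lvs </dev/null >lvs-devnull.log 2>&1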

Example script /etc/pve/openvz/100.mount:
Code:
#!/bin/bash

PATH="/bin:/usr/bin:/sbin:/usr/sbin"
lvs -vvv -d || exit $?
vgs -vvv -d || exit $?

exit 0

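If a missing or unreadable stdin inherited from vzctl is indeed the trigger, one possible workaround (untested, my assumption) is to give the whole script a readable stdin before calling the LVM tools:
Code:
#!/bin/bash

PATH="/bin:/usr/bin:/sbin:/usr/sbin"

# Hypothetical workaround: reopen stdin from /dev/null so child
# processes (lvs, vgs) inherit a readable file descriptor 0.
exec </dev/null

lvs || exit $?
vgs || exit $?

exit 0
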
Messages during OpenVZ start:
Code:
root@kvmtest:~# vzctl start 100
Starting container ...
File descriptor 3 (/var/log/vzctl.log) leaked on lvs invocation. Parent PID 2733: /bin/bash
File descriptor 4 (/dev/vzctl) leaked on lvs invocation. Parent PID 2733: /bin/bash
File descriptor 5 (pipe:[35519]) leaked on lvs invocation. Parent PID 2733: /bin/bash
File descriptor 6 (pipe:[35519]) leaked on lvs invocation. Parent PID 2733: /bin/bash
File descriptor 7 (/dev/null) leaked on lvs invocation. Parent PID 2733: /bin/bash
  stdin: fdopen failed: Invalid argument
  stdin: fclose failed: Invalid argument
  stdin: fdopen failed: Invalid argument
Error executing mount script /etc/pve/openvz/100.mount

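The "File descriptor ... leaked on lvs invocation" lines are a separate, cosmetic warning: lvm2 complains about any extra descriptors it inherits (here passed down from vzctl via bash). lvm2 supports the LVM_SUPPRESS_FD_WARNINGS environment variable for silencing exactly this, e.g. near the top of the mount script:
Code:
# Suppress lvm2's "leaked on invocation" warnings about inherited fds
export LVM_SUPPRESS_FD_WARNINGS=1
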
Expected output (it works without "nohup"):
Code:
root@kvmtest:~# lvs
  LV   VG   Attr      LSize Pool Origin Data%  Move Log Copy%  Convert
  data pve  -wi-ao--- 4.50g                                           
  root pve  -wi-ao--- 2.50g                                           
  swap pve  -wi-ao--- 1.25g                                           
root@kvmtest:~# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  pve    1   3   0 wz--n- 9.50g 1.25g

uname -a:
Code:
Linux kvmtest 2.6.32-26-pve #1 SMP Mon Oct 14 08:22:20 CEST 2013 x86_64 GNU/Linux

lsb_release -a:
Code:
No LSB modules are available.
Distributor ID:    Debian
Description:    Debian GNU/Linux 7.2 (wheezy)
Release:    7.2
Codename:    wheezy

pveversion -v:
Code:
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-20 (running version: 3.1-20/c3aa0f1a)
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-7
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

I've tested this on a clean Proxmox 3.1 install (Wheezy 7.1) running on KVM, and again after a package update & upgrade (Wheezy 7.2); the same thing happens.
The same also happens on our production cluster (kernel pve-kernel-2.6.32-23-pve: 2.6.32-109). Only downgrading lvm2 to version 2.02.95-8 resolves the issue, but I don't think that's a proper fix (e.g. clvm depends on lvm2).

So the simple steps to reproduce are:
  1. install a clean Proxmox 3.1 on KVM or bare metal
  2. nohup lvs >lvs.log ; cat lvs.log (see the strace sketch below for digging deeper)
 
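For anyone wanting more detail before filing a bug report, a diagnostic sketch (fdopen is a libc call, so the interesting part is the underlying syscalls it issues on fd 0):
Code:
nohup strace -f -o lvs.trace lvs >lvs.log
grep -B2 EINVAL lvs.trace
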
Tell that to all the users affected by this bug (I think it's pretty critical and basically affects most users [with LVM]).

I see only you here; maybe some other thread is crowded with other users' reports of this issue, and there's some hint or solution there?

Sorry, I don't have this issue, since I never do what you do in CTs.

Marco