xrameau
Guest
Hi,
Having updated from 1.9 to 2.0 RC1 yesterday, I've discovered an issue on my install (see below for full details).
When I try to add a VG for storage, every VG is prefixed with 'vgs ' in the dropdown list,
and this invalidates every VG I try to add through the GUI.
I think this is due to my specific lvm2 configuration (command_names = 1 in the log section), but all my monitoring scripts were written against this setting (as well as against multiple metadata copies).
Also, please note that when upgrading from a previous installation, pve-cluster cannot be started, because of an error when it tries to mount /etc/pve via FUSE.
(All the previous configuration is still present in /etc/pve, so FUSE refuses to mount over a non-empty directory.)
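In case it helps others hitting this second issue, it can be worked around by moving the stale contents of /etc/pve aside before starting pve-cluster. A minimal sketch, assuming the old files only need to be moved out of the way (the helper name and backup path are mine, not part of any official procedure):

```shell
# Move any stale contents of a mount point aside so FUSE can mount over it.
# Note: the * glob skips dotfiles, which is fine for /etc/pve.
clear_mountpoint() {
    dir="$1"; backup="$2"
    if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
        mkdir -p "$backup"
        mv "$dir"/* "$backup"/
    fi
}

# On the affected node (commented out -- inspect the contents first):
# clear_mountpoint /etc/pve /root/pve-config-1.9-backup
# /etc/init.d/pve-cluster start
```

Keeping the backup around lets you compare the old 1.9 configuration against what pve-cluster generates after the mount succeeds.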
So, for the first issue, I've patched some files (maybe not the best way to do it); here is the patch:
CluOne:/usr/share/perl5/PVE# diff -urN Storage.pm.old Storage.pm
--- Storage.pm.old 2012-03-04 08:22:27.000000000 +0100
+++ Storage.pm 2012-03-03 21:51:34.000000000 +0100
@@ -1437,6 +1437,8 @@
my $line = shift;
$line = trim($line);
+ $line =~ s/^vgs//;
+ $line = trim($line);
my ($name, $size, $free) = split (':', $line);
@@ -1466,6 +1468,8 @@
my $line = shift;
$line = trim($line);
+ $line =~ s/^lvs//;
+ $line = trim($line);
my ($vg, $name, $size, $uuid, $tags) = split (':', $line);
CluOne:/usr/share/perl5/PVE#
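For reference, the prefix that the patch strips comes from log/command_names = 1, which makes reporting commands like vgs and lvs prepend their own name to every output line. The same stripping can be sketched at the shell level (the sample line below is invented), and LVM tools also accept a --config override, which may be a cleaner alternative since it avoids patching Storage.pm at all:

```shell
# With log { command_names = 1 }, each output line carries the command name:
line='  vgs  pve:100.00g:20.00g'
# Strip the prefix, as the Perl patch above does:
stripped=$(printf '%s' "$line" | sed 's/^[[:space:]]*vgs[[:space:]]*//')
echo "$stripped"    # -> pve:100.00g:20.00g

# Alternative: override the option for a single invocation instead of
# patching the parser (run on an LVM host; field names per lvm2 docs):
# vgs --config 'log { command_names = 0 }' --noheadings --separator ':' \
#     -o vg_name,vg_size,vg_free
```

The --config route would let pveproxy keep working regardless of what the local lvm.conf sets, which seems more robust than post-processing the output.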
Installation details:
Debian distribution: old lenny, upgraded to squeeze
CluOne:/usr/share/perl5/PVE# pveversion --verbose
pve-manager: 2.0-38 (pve-manager/2.0/af81df02)
running kernel: 2.6.32-7-pve
proxmox-ve-2.6.32: 2.0-60
pve-kernel-2.6.32-6-pve: 2.6.32-55+ovzfix-1
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.88-2pve1
clvm: 2.02.88-2pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-1
pve-cluster: 1.0-23
qemu-server: 2.0-25
pve-firmware: 1.0-15
libpve-common-perl: 1.0-17
libpve-access-control: 1.0-17
libpve-storage-perl: 2.0-12
vncterm: 1.0-2
vzctl: 3.0.30-2pve1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-5
ksm-control-daemon: 1.1-1
CluOne:/usr/share/perl5/PVE#
CluOne:/usr/share/perl5/PVE# dpkg -S $(which vgs lvs)
lvm2: /sbin/vgs
lvm2: /sbin/lvs
CluOne:/usr/share/perl5/PVE# dpkg -s lvm2
Package: lvm2
Status: install ok installed
Priority: optional
Section: admin
Installed-Size: 1324
Maintainer: Proxmox Support Team <support@proxmox.com>
Architecture: amd64
Version: 2.02.88-2pve1
Depends: libc6 (>= 2.3), libdevmapper1.02.1 (>= 2:1.02.67), libreadline6 (>= 6.0), libudev0 (>= 0.140), lsb-base, dmsetup (>> 2:1.02.47), initscripts (>= 2.88dsf-13.1)
Conffiles:
/etc/init.d/lvm2 5ca94667eddb105054a69b6ae84ceed5
/etc/lvm/lvm.conf 94f76247bae453d35dc2f2425cdcbf62
Description: Linux Logical Volume Manager
This is LVM2, the rewrite of The Linux Logical Volume Manager. LVM
supports enterprise level volume management of disk and disk subsystems
by grouping arbitrary disks into volume groups. The total capacity of
volume groups can be allocated to logical volumes, which are accessed as
regular block devices.
Homepage: http://sources.redhat.com/lvm2/
CluOne:/usr/share/perl5/PVE# egrep '^deb ' /etc/apt/sources.list
deb http://ftp.fr.debian.org/debian/ squeeze main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib non-free
deb http://download.proxmox.com/debian squeeze pve
CluOne:/usr/share/perl5/PVE#
LVM configuration:
CluOne:/usr/share/perl5/PVE# egrep -v '^ *(#|$)' /etc/lvm/lvm.conf
devices {
dir = "/dev"
scan = [ "/dev" ]
obtain_device_list_from_udev = 1
preferred_names = [ ]
filter = [ "a|^/dev/cciss/c0d0p[0-9]*|", "a|^/dev/sda[0-9]*|", "r/.*/" ]
cache_dir = "/etc/lvm/cache"
cache_file_prefix = ""
write_cache_state = 1
sysfs_scan = 1
md_component_detection = 1
md_chunk_alignment = 1
data_alignment_detection = 1
data_alignment = 0
data_alignment_offset_detection = 1
ignore_suspended_devices = 0
disable_after_error_count = 0
require_restorefile_with_uuid = 1
pv_min_size = 2048
issue_discards = 0
}
log {
verbose = 0
syslog = 1
file = "/var/log/lvm2.log"
overwrite = 0
level = 3
indent = 1
command_names = 0
prefix = " "
activation = 1
}
backup {
backup = 1
backup_dir = "/etc/lvm/backup"
archive = 1
archive_dir = "/etc/lvm/archive"
retain_min = 10
retain_days = 30
}
shell {
history_size = 100
}
global {
umask = 077
test = 0
units = "h"
si_unit_consistency = 1
activation = 1
format = "lvm2"
proc = "/proc"
locking_type = 1
wait_for_locks = 1
fallback_to_clustered_locking = 1
fallback_to_local_locking = 1
locking_dir = "/var/lock/lvm"
prioritise_write_locks = 1
abort_on_internal_errors = 0
detect_internal_vg_cache_corruption = 0
metadata_read_only = 0
mirror_segtype_default = "mirror"
}
activation {
checks = 0
udev_sync = 1
udev_rules = 1
verify_udev_operations = 0
missing_stripe_filler = "error"
reserved_stack = 256
reserved_memory = 8192
process_priority = -18
mirror_region_size = 512
readahead = "auto"
mirror_log_fault_policy = "allocate"
mirror_image_fault_policy = "remove"
snapshot_autoextend_threshold = 100
snapshot_autoextend_percent = 20
use_mlockall = 0
monitoring = 1
polling_interval = 15
}
metadata {
pvmetadatacopies = 2
}
dmeventd {
mirror_library = "libdevmapper-event-lvm2mirror.so"
snapshot_library = "libdevmapper-event-lvm2snapshot.so"
}
CluOne:/usr/share/perl5/PVE#
Regards,