LVM on iSCSI problems: metadata too large for circular buffer

joelserrano

Renowned Member
Mar 16, 2011
Hi everybody,

I'm having the LVM metadata space problem:

Code:
TASK ERROR: clone failed: lvcreate 'vzpve4/pve-vm-4000' error:   VG vzpve4 metadata too large for circular buffer

We currently have 3 QNAP NAS units connected (iSCSI + LVM) to our Proxmox VE 3.1 cluster. All of them have free metadata space except for one:


Code:
root@pve1:~# vgs --units k -o vg_mda_count,vg_mda_free,vg_mda_size vzpve
  #VMda VMdaFree  VMdaSize 
      1    79.50k   252.00k
root@pve1:~#

root@pve1:~# vgs --units k -o vg_mda_count,vg_mda_free,vg_mda_size vzpve3
  #VMda VMdaFree  VMdaSize 
      1    95.00k   252.00k
root@pve1:~#

root@pve1:~# vgs --units k -o vg_mda_count,vg_mda_free,vg_mda_size vzpve4
  #VMda VMdaFree  VMdaSize 
      1        0k   252.00k
root@pve1:~#

OK, so the vzpve4 VG has its metadata space full.

Obviously it's the most used NAS:

Code:
root@pve1:~# lvs | grep -c "vzpve "
174
root@pve1:~#

root@pve1:~# lvs | grep -c "vzpve3 "
108
root@pve1:~#

root@pve1:~# lvs | grep -c "vzpve4 "
457
root@pve1:~#


So, it's clear that I have a problem! I've already read the related posts...
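For reference, the same metadata usage should also be visible per physical volume with something along these lines (listing all PVs, so no device path needed):

Code:
root@pve1:~# pvs --units k -o pv_name,vg_name,pv_mda_count,pv_mda_free,pv_mda_size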

My idea is to create a new LUN and make it visible to the cluster.
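If I end up doing the iSCSI part by hand instead of through the GUI, I guess it would be something like this on each node (the portal IP and target IQN below are just placeholders for the new QNAP LUN):

Code:
root@pve1:~# iscsiadm -m discovery -t sendtargets -p 192.168.0.50
root@pve1:~# iscsiadm -m node -T iqn.2004-04.com.qnap:newlun -p 192.168.0.50 --login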

To configure the LVM storage, I've seen in /usr/share/perl5/PVE/Storage/LVMPlugin.pm that the default "metadatasize" is 250k:

Code:
[...]
# we use --metadatasize 250k, which results in "pe_start = 512"
# so pe_start is aligned on a 128k boundary (advantage for SSDs)
my $cmd = ['/sbin/pvcreate', '--metadatasize', '250k', $device];
[...]

Is it safe for me to change the "250k" to "64m", for example? (I know 64 MB is a huge number, but I really don't want to have to deal with this ever again... maybe too big?)
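In other words, on the new LUN the plugin would then run something equivalent to the following (with /dev/sdX as a placeholder for the new iSCSI device), which I could also try by hand first to confirm the resulting metadata area size:

Code:
root@pve1:~# pvcreate --metadatasize 64m /dev/sdX
root@pve1:~# pvs --units k -o pv_name,pv_mda_size,pv_mda_free /dev/sdX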


After changing that, do I need to restart pvedaemon and apache?

What I want to do is assign the new LUN to the PVE cluster, but when I create the LVM storage I want the metadatasize to no longer be a problem.
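Once the new storage is added, I would simply verify it with the same vgs check as above (vzpve5 being whatever name the new VG ends up with):

Code:
root@pve1:~# vgs --units k -o vg_mda_count,vg_mda_free,vg_mda_size vzpve5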



My installation:

Code:
root@pve1:~# pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
root@pve1:~#


Any other ideas/solutions are very welcome!

Thanks in advance.

Best regards,
Joel.
 
