DRBD9 LVM thin provisioning

seventh
Jan 28, 2016

Good day,

I followed the guide to set up DRBD9, https://pve.proxmox.com/wiki/DRBD9, and I found a major issue with LVM thin provisioning and its metadata.

If I, for example, follow the guide and create the drbdthinpool with a size of 1600G:
Code:
root@proxmox:~# vgcreate drbdpool /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
  Volume group "drbdpool" successfully created

root@proxmox:~# lvcreate -L 1600G -n drbdthinpool -T drbdpool
  Logical volume "drbdthinpool" created.

The metadata pool size is then auto-calculated based on the chunk size; in my example it was around 108MB.
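(For reference, the size of the auto-created metadata LV can be checked with 'lvs -a', which also lists the hidden _tmeta/_tdata volumes. The output below is only a sketch; your numbers will differ.)
Code:
root@proxmox:~# lvs -a -o lv_name,lv_size drbdpool
  LV                    LSize
  drbdthinpool            1.56t
  [drbdthinpool_tdata]    1.56t
  [drbdthinpool_tmeta]  108.00m
  [lvol0_pmspare]       108.00m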

The BIG issue for me was that when I started to restore five vzdump backups, the metadata pool filled up to 100%, and after that weird things started to happen, with I/O errors.

I then started to read the manual for lvmthin, http://man7.org/linux/man-pages/man7/lvmthin.7.html

I found out the following:
  1. The recommended metadata pool size for all purposes is 1G.
  2. Automatic extension of thin pools is not enabled by default in /etc/lvm/lvm.conf on a fresh PVE 4.2 install.
The solution for me was to include --poolmetadatasize 1G when creating drbdthinpool and to enable auto extend in lvm.conf.
Code:
root@proxmox:~# lvcreate -L 1600G -n drbdthinpool -T drbdpool --poolmetadatasize 1G
  Logical volume "drbdthinpool" created.

In lvm.conf I changed thin_pool_autoextend_threshold from 100 to 80.

So if usage of the data or metadata pool reaches 80%, it is automatically extended by 20%.
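For reference, these knobs live in the activation section of /etc/lvm/lvm.conf; after the change, the relevant part looks roughly like this (the 20% step is the stock value of thin_pool_autoextend_percent):
Code:
activation {
    # 100 disables automatic extension; anything below 100 enables it.
    # With 80, the pool is extended as soon as data or metadata usage hits 80%.
    thin_pool_autoextend_threshold = 80

    # Each extension grows the pool by 20% of its current size.
    thin_pool_autoextend_percent = 20
}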

Why isn't lvm auto extend enabled by default in Proxmox?
In the lvmthin manual it clearly states "(Disabling automatic extension is not recommended.)"
 

Because we already use all available space in our default setup.
 
I also think the admin should determine the size of the pool. Automatically extending a pool is IMHO strange.
 
Because we already use all available space in our default setup.
I know the guide says 511G for the 512G disk to fill up the data pool, but the issue here isn't the data pool size, it's the metadata size. That one is also auto-extended when it reaches 80% of usage.
The thin pool can't exceed the volume group, so I don't really understand why you can't enable auto extend?

If you don't want to enable auto extend in lvm.conf, then you should at least consider using a bigger metadata size in the guide.
The auto-calculated metadata is too small, at least in my situation when restoring vzdump backups.
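For anyone who already created the pool with the small auto-calculated metadata LV, it can also be grown in place instead of recreating the pool; a sketch (the amount is just an example to get close to 1G):
Code:
root@proxmox:~# lvextend --poolmetadatasize +900M drbdpool/drbdthinpool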
 
It feels like I'm not getting through to you... :(

If I could rephrase the topic, I would remove DRBD9 from it, because this is an LVM thin issue and not a DRBD issue.

At least this thread has been created and other people might benefit from it!
 
To bump this topic.
IMO thin_pool_autoextend_threshold = 100 is not a very safe default setting.

In my setup I have thin volumes on external MSA storage (one per enclosure), not on the PVE built-in one.

While it could be considered the administrator's responsibility, I relied on 'lvs' reporting that the volumes are monitored and auto-extended, until one of the thin volumes (just a single LXC container running MySQL) ran into ~99.96% Meta usage:

Code:
root@box:~# lvs
  LV            VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve         twi-aotz--  78.57g              0.00   0.43
  root          pve         -wi-ao----  34.00g
  swap          pve         -wi-ao----   8.00g
..
  thin          storage-sde twi-aotzM-   1.60t             25.76  99.96
  vm-112-disk-1 storage-sde Vwi-aotz--   8.00g thin        18.89
  vm-112-disk-2 storage-sde Vwi-aotz-- 400.00g thin        66.09
...

.. and bad things started to happen:
Code:
[ 1555.442920] device-mapper: space map metadata: unable to allocate new metadata block
[ 1555.442946] device-mapper: thin: 251:3: metadata operation 'dm_thin_insert_block' failed: error = -28
[ 1555.442966] device-mapper: thin: 251:3: aborting current metadata transaction
[ 1555.445397] device-mapper: thin: 251:3: switching pool to read-only mode
[ 1555.523867] Buffer I/O error on dev dm-8, logical block 16276704, lost async page write

Obviously, changing the threshold from the 'disabled' default of 100 to a value below 100 fixed the issue.
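For completeness, once the metadata is full and the pool has switched to read-only mode, the rough recovery path per the lvmthin man page is to deactivate the pool, repair it, and grow the metadata before reactivating. A sketch using the volume names from above, not a tested recipe:
Code:
# thin LVs using the pool must be deactivated before the pool itself
root@box:~# lvchange -an storage-sde/thin
# rebuild the metadata (runs thin_repair onto the spare metadata LV)
root@box:~# lvconvert --repair storage-sde/thin
# give the metadata LV real headroom before bringing things back up
root@box:~# lvextend --poolmetadatasize +1G storage-sde/thin
root@box:~# lvchange -ay storage-sde/thin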

Besides, as the OP said, the data volume is not the problem but rather the metadata, which in my case was also just ~100MB by default.

If anything, IMO it's worth a gotcha or FAQ entry.
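Side note: whether dmeventd is actually monitoring a pool, which is what I had relied on 'lvs' for, can also be checked explicitly via the seg_monitor report field; a sketch (output will vary):
Code:
root@box:~# lvs -o+seg_monitor storage-sde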
 
