DRBD9 metadata

draufsicht

New Member
Jun 26, 2015
Hello,

I have a Proxmox cluster with 3 nodes; two of them use DRBD as storage. DRBD was set up as your wiki describes.
All seems good so far, but when I start to restore a backup, the metadata percentage on one node grows much faster than on the other.
The logical volume drbdthinpool has the same size on both nodes:
Code:
pve1:~# lvs
  LV               VG       Attr       LSize  Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0      drbdpool -wi-ao----  4.00m                                                             
  .drbdctrl_1      drbdpool -wi-ao----  4.00m                                                             
  drbdthinpool     drbdpool twi-aotz--  2.59t                     1.53   4.67                             
  lvol0            drbdpool -wi------- 84.00m                                                             
  vm-101-disk-1_00 drbdpool Vwi-aotz-- 20.01g drbdthinpool        99.98                                   
  vm-101-disk-2_00 drbdpool Vwi-aotz-- 64.02g drbdthinpool        32.12 
Code:
pve2:~# lvs
  LV               VG       Attr       LSize  Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0      drbdpool -wi-ao----  4.00m                                                             
  .drbdctrl_1      drbdpool -wi-ao----  4.00m                                                             
  drbdthinpool     drbdpool twi-aotz--  2.59t                     1.37   8.72                             
  vm-101-disk-1_00 drbdpool Vwi-aotz-- 20.01g drbdthinpool        99.98                                   
  vm-101-disk-2_00 drbdpool Vwi-aotz-- 64.02g drbdthinpool        25.43
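To compare the two pools in absolute terms rather than percentages, something like the following should list the hidden metadata LV of the thin pool directly (a sketch only; the VG name is taken from the output above, and the field list is just one possible choice):
Code:
# list hidden thin-pool components such as [drbdthinpool_tmeta] with absolute sizes
lvs -a --units m -o lv_name,lv_size,data_percent,metadata_percent drbdpool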

The data percentage differs because the backup restore is still running.
But node2 has less Data% and almost double the Meta%. On my first try, DRBD crashed because no metadata space was left, even though
only ~800G of the 2.6T disk were in use.
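In case the pool threatens to run out of metadata space again, the metadata LV of a thin pool can be grown online; a minimal sketch, assuming free extents are left in the drbdpool VG (the +512M is only an example amount):
Code:
# grow the thin pool's metadata LV by 512M so the pool does not run out of metadata space
lvextend --poolmetadatasize +512M drbdpool/drbdthinpool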

Does anybody know how I can fix that?
Thank you
 
Hello,

Here's the resolution: the problem was a different chunk size in the underlying RAID on each node. Setting both to a chunk size of 512k fixed it.
But in my opinion the metadata still grows too fast. After a few backup restores, the logical volumes add up to ~700G,
and the metadata percentage is already almost 50%, even though 700G is not even half of 2.6T.
Is this normal?
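For reference, one way to verify the chunk sizes on both nodes, assuming Linux software RAID (hardware controllers need their vendor's CLI) and with /dev/md0 only as an example device name:
Code:
# RAID chunk size per md array
cat /proc/mdstat
mdadm --detail /dev/md0 | grep -i chunk

# chunk size of the LVM thin pool
lvs -o lv_name,chunk_size drbdpool/drbdthinpool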
 
Better ask DRBD9-specific questions on the DRBD mailing lists.
 
