DRBD9 and pvesm

resoli

Renowned Member
Hello,

I don't want to bother you with another rant about "DRBD9 reports wrong pool free space"; there is already another thread on this, and I posted my findings on the drbd-user mailing list (where I also detailed the current drbd version in my setup).

I want to share my experience with PVE storage management reporting on a cluster with three nodes, configured as explained in the wiki.

So, my storage.cfg is as follows:
Code:
# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content images,rootdir,vztmpl,iso
    maxfiles 0

drbd: drbdthin
    content images,rootdir
    redundancy 3

My "drbdthinpool" thinpool size is 1600MB, on all three nodes, and as you can see redundancy is set to 3.

Nevertheless, pvesm command (and, consistently, storage gui) reports this:

Code:
# pvesm status -storage drbdthin
drbdthin   drbd 1      5032497152      4985872528        46624624 99.57%

It seems that the redundancy of 3 is not taken into account when reporting the total available space; the three drbdthinpool sizes are simply summed up.
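Just to show the arithmetic (a rough check on my side, assuming the ~1600GB thinpool per node mentioned above):
Code:
# three thinpools of ~1600GB simply summed up:
echo $((3 * 1677721600))      # 5033164800 kiB, roughly the 5032497152 kiB total shown above
# what I would expect pvesm to show if redundancy 3 were factored in:
echo $((5032497152 / 3))      # ~1677499050 kiB, i.e. ~1.56 TiB usable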

and:
Code:
# pvesm list drbdthin
drbdthin:vm-100-disk-1   raw 10737418240 100
drbdthin:vm-101-disk-1   raw 4294967296 101
drbdthin:vm-101-disk-2   raw 64424509440 101
drbdthin:vm-102-disk-1   raw 10737418240 102
drbdthin:vm-103-disk-1   raw 10737418240 103
drbdthin:vm-104-disk-1   raw 10737418240 104
drbdthin:vm-104-disk-2   raw 966367641600 104
drbdthin:vm-120-disk-1   raw 10737418240 120
drbdthin:vm-121-disk-1   raw 10737418240 121

(As reported on the drbd-user mailing list, my resources are currently six 10GB disks, one 900GB, one 60GB and one 4GB => 1024 GB in total, while the reported usage is much higher; the reason is that drbdmanage wrongly reports only 45GB free on one node.)
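The sum of the logical sizes listed above can be checked quickly (my own arithmetic, in GiB):
Code:
# six 10GiB disks + one 4GiB + one 60GiB + one 900GiB, from the pvesm list output above
echo $(( (6*10737418240 + 4294967296 + 64424509440 + 966367641600) / 1024**3 ))   # 1024
# even fully replicated three times, that is at most ~3 TiB of physical allocation,
# well below the ~4.64 TiB of usage that pvesm reports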

Apart from the free space reporting, I think that the total available space needs a correction in PVE, or am I wrong?

Thanks,
rob
 
That was my feeling too, but it seems that the Proxmox team just ignored me, thinking that "drbd9 is broken". They could be right; we should just understand whether the "redundancy" adjustment should be made by the drbd tools or by the Proxmox ones.
In my situation, with redundancy 2, for instance, I have:
Code:
# vgdisplay
  VG Name               drbdpool
  VG Size               1.46 TiB
  PE Size               4.00 MiB
  Total PE              381546
  Alloc PE / Size       371248 / 1.42 TiB
and the GUI states:
Code:
Size: 2.83 TB
Used: 2.60TB
Avail: 238.62GB
also
Code:
pvesm status --storage drbd1
drbd1   drbd 1      3040870400      2790660268       250210132 92.27%
but if I query the drbd tools directly, I see that they are wrong as well:
Code:
# drbdmanage list-free-space 2
The maximum size for a 2x redundant volume is 250210132 kiB
(Aggregate cluster storage size: 3040870400 kiB)
Proxmox 4.1 and drbdmanage 0.91-1 (I'm too scared to upgrade to 4.2 right now)
Sigh!
 
Hello @mmenaz. Please note that drbdmanage already accounts for the requested redundancy when reporting available space; pvesm simply shows the same numbers:
Code:
# drbdmanage list-free-space 3
The maximum size for a 3x redundant volume is 46624624 kiB
(Aggregate cluster storage size: 5032497152 kiB)

Aggregate cluster storage is in fact correct, in your case as well, I think (you have reported the VG size, not the thinpool size).
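Again just my own arithmetic, taking your numbers from above:
Code:
# aggregate reported by pvesm / drbdmanage, converted to GiB
echo $((3040870400 / 1024**2))   # 2900 GiB ~= 2 x ~1450 GiB, i.e. two 1.42 TiB thinpools
# two 1.46 TiB VGs would be ~2990 GiB instead, which does not match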

The free-space calculation is broken, but it is broken upstream in drbd9, so it doesn't make sense to take any action on the PVE side.

bye,
rob
 
I don't understand you completely. I have 2x800GB SSDs in each of the 2 storage nodes (the 3rd node is only for quorum).
That is why the VG is 1.4TB (I have not fully used the theoretical 1.6TB capacity).
Code:
root@prox01:~# lvs
  LV  VG  Attr  LSize  Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0  drbdpool -wi-ao----  4.00m   
  .drbdctrl_1  drbdpool -wi-ao----  4.00m   
  drbdthinpool  drbdpool twi-aotz--  1.42t  54.75  28.03
OMHO, the "maximum size of a 2x reduntant volume" should be equal to free space (here there is the problem that the storage is "thin" only on the node where the VM has been created, as you can see in my other posts aboud drbd9 problems)
The aggregate cluster storage should be the total drbdthinpool size, i.e. 1.42TB.
If you are correct that drbd is ok, then yes, the drbd free-space calculation is broken upstream, but the total available storage is not, so PVE should do the math (or drbd9 is broken in that too, and 'Aggregate' is misleading and should be replaced with "total available").
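By "do the math" I mean something like dividing the aggregate by the configured redundancy (my own sketch of the numbers I would expect, not anything pvesm does today):
Code:
# aggregate reported for my cluster, divided by redundancy 2
echo $((3040870400 / 2))   # 1520435200 kiB, i.e. ~1.42 TiB, which matches one thinpool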
In any case, I'm scared: this is my first (and only, so far) cluster with this setup, and I'm crossing my fingers that I'm not asked to increase VM storage capacity and that things don't badly break for some reason.
 
No, drbd redundancy is not constrained like the PVE one. It is perfectly ok to create a resource attached to one node only (redundancy 1), while another one may be attached to 2 nodes or more. So the aggregate space as the sum of the storage sizes on the various nodes makes sense.

rob
 
I forgot: it "makes sense" at the drbdmanage level, not at the PVE one, where redundancy is currently fixed and configured at cluster level.
 
