Hello,
I have a Proxmox cluster with 3 nodes, two of which use DRBD as storage. DRBD was built as your wiki describes.
Everything seems good so far, but when I start to restore a backup, the metadata percentage grows much faster on one node than on the other.
The logical volume drbdthinpool has the same size on both nodes:
Code:
pve1:~# lvs
  LV               VG       Attr       LSize  Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0      drbdpool -wi-ao----  4.00m
  .drbdctrl_1      drbdpool -wi-ao----  4.00m
  drbdthinpool     drbdpool twi-aotz--  2.59t                     1.53   4.67
  lvol0            drbdpool -wi------- 84.00m
  vm-101-disk-1_00 drbdpool Vwi-aotz-- 20.01g drbdthinpool        99.98
  vm-101-disk-2_00 drbdpool Vwi-aotz-- 64.02g drbdthinpool        32.12
Code:
pve2:~# lvs
  LV               VG       Attr       LSize  Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0      drbdpool -wi-ao----  4.00m
  .drbdctrl_1      drbdpool -wi-ao----  4.00m
  drbdthinpool     drbdpool twi-aotz--  2.59t                     1.37   8.72
  vm-101-disk-1_00 drbdpool Vwi-aotz-- 20.01g drbdthinpool        99.98
  vm-101-disk-2_00 drbdpool Vwi-aotz-- 64.02g drbdthinpool        25.43
The data percentage differs because the backup restore is still running.
But node2 has a lower data% and almost double the meta%. On my first try, DRBD crashed because there was no metadata space left, even though the disk only had about 800G in use (out of 2.6T).
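If the metadata runs full again, I assume the pool's metadata LV can be grown by hand with the standard LVM tools (a sketch only; the VG and pool names match my setup above, and it assumes there are still free extents in the drbdpool VG). That would only be a workaround, though, and would not explain why one node grows so much faster:

Code:
# show the hidden _tmeta/_tdata sub-LVs so the metadata usage
# on both nodes can be compared directly
lvs -a -o lv_name,lv_size,data_percent,metadata_percent drbdpool

# grow the thin pool's metadata LV by 1 GiB before it runs full
lvextend --poolmetadatasize +1G drbdpool/drbdthinpool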
Does anybody know how I can fix that?
Thank you