Hello,
I don't want to bother you with another rant about "DRBD9 reports wrong pool free space"; there is already another thread on that, and I posted my findings on the drbd-user mailing list (where I also detailed the current drbd version in my setup).
I just want to share my experience with PVE storage reporting on a three-node cluster, configured as explained in the wiki.
So, my storage.cfg is as follows:
Code:
# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content images,rootdir,vztmpl,iso
    maxfiles 0

drbd: drbdthin
    content images,rootdir
    redundancy 3

My "drbdthinpool" thin pool size is 1600GB on all three nodes, and as you can see redundancy is set to 3.
Nevertheless, the pvesm command (and, consistently, the storage GUI) reports this:
Code:
# pvesm status -storage drbdthin
drbdthin   drbd 1      5032497152      4985872528        46624624 99.57%

It seems that redundancy 3 is not taken into account when reporting the total available space: the three drbdthinpool sizes are simply summed up.
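To make the arithmetic explicit, here is a rough sketch (in Python, assuming each node's thin pool is exactly 1600GiB; the small difference from the figure pvesm prints is presumably metadata):

Code:
# Rough sketch, assuming a 1600 GiB thin pool per node (my assumption)
pool_per_node_kib = 1600 * 1024 * 1024          # 1,677,721,600 KiB
nodes = 3
redundancy = 3

raw_total_kib = pool_per_node_kib * nodes       # 5,033,164,800 KiB, close to the 5032497152 pvesm shows
usable_total_kib = raw_total_kib // redundancy  # 1,677,721,600 KiB, what I would expect as "total"

print(raw_total_kib, usable_total_kib)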
and:
Code:
# pvesm list drbdthin
drbdthin:vm-100-disk-1   raw 10737418240 100
drbdthin:vm-101-disk-1   raw 4294967296 101
drbdthin:vm-101-disk-2   raw 64424509440 101
drbdthin:vm-102-disk-1   raw 10737418240 102
drbdthin:vm-103-disk-1   raw 10737418240 103
drbdthin:vm-104-disk-1   raw 10737418240 104
drbdthin:vm-104-disk-2   raw 966367641600 104
drbdthin:vm-120-disk-1   raw 10737418240 120
drbdthin:vm-121-disk-1   raw 10737418240 121

(As reported on the drbd-user mailing list, my resources are currently six 10GB disks, one 900GB, one 60GB and one 4GB, i.e. 1024GB in total, while the reported usage is much higher; the reason is that drbdmanage wrongly reports only 45GB free on one node.)
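Just to double-check the total provisioned size from the listing above (the sizes are the byte values pvesm prints):

Code:
# Sum of the provisioned disk sizes from the pvesm list output above (bytes)
disk_bytes = [
    10737418240,    # vm-100-disk-1 (10 GiB)
    4294967296,     # vm-101-disk-1 (4 GiB)
    64424509440,    # vm-101-disk-2 (60 GiB)
    10737418240,    # vm-102-disk-1 (10 GiB)
    10737418240,    # vm-103-disk-1 (10 GiB)
    10737418240,    # vm-104-disk-1 (10 GiB)
    966367641600,   # vm-104-disk-2 (900 GiB)
    10737418240,    # vm-120-disk-1 (10 GiB)
    10737418240,    # vm-121-disk-1 (10 GiB)
]
print(sum(disk_bytes) / 1024**3)  # 1024.0 GiB provisioned, well below the ~4.6 TiB "used" that pvesm reports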
Apart from the free space reporting, I think the total available space needs a correction in PVE, or am I wrong?
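Something like this is what I have in mind, just as a pseudo-Python sketch of the idea (not the actual PVE code; the function name and the assumption that the figures are in KiB are mine):

Code:
def adjust_for_redundancy(total_kib, used_kib, avail_kib, redundancy):
    # Hypothetical correction: divide the summed per-node figures by the
    # redundancy count, so the reported numbers reflect usable space
    # rather than raw space across all replicas.
    return (total_kib // redundancy,
            used_kib // redundancy,
            avail_kib // redundancy)

# With the numbers from 'pvesm status' above and redundancy 3:
print(adjust_for_redundancy(5032497152, 4985872528, 46624624, 3))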
Thanks,
rob
			