Storage replication

brucexx

It seems like storage replication doubles the reported storage usage for my zfspool. I see it in the GUI but not in the CLI (zpool list shows the correct values).

Is this a bug in 5.1.42? Should I ignore it? The install, storage and replication are otherwise OK.
 
Just to elaborate, the GUI shows: Usage 23.77% (85.66 GiB of 360.38 GiB) for just one VM of 42 GB, so it roughly doubles its size.

The output from the zpool list shows:

root@pve02-nyc:~# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfspool   372G  42.4G   330G         -     2%    11%  1.00x  ONLINE  -

The target node shows the same.

Thank you
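
To see where that difference comes from, one quick check (a suggested diagnostic, not something from this post) is the per-dataset space breakdown, which splits USED into snapshot space, the data itself and reservations:

Code:
# USEDSNAP = snapshots, USEDDS = the data itself, USEDREFRESERV = reservations
zfs list -o space -r zfspool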
 
I guess these are snapshots (I am new to ZFS), but are they really big enough to double the size of the original VM?

I added a second VM and its size on the drive also doubled, as seen in the GUI (once replication was enabled). Is there a way to deal with this somehow? That is a lot of disk space...

Any advice appreciated.
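
If the suspicion is snapshots, a simple way to check (a suggested command, not from the thread) is to list just the snapshots with their space usage; USED is what destroying the snapshot would free, REFER is the data it currently pins:

Code:
zfs list -t snapshot -o name,used,referenced -r zfspool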
 
Hi,

which view are you talking about?
The node summary or the Datacenter dashboard?
 
I am referring to:
Under the node >> Storage, select the storage, then Summary. See attached. There are two VMs there, both 42 GB in size, so the total storage for the two VMs should be what zpool list shows:

root@pve01-nyc:~# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfspool   444G  76.7G   367G         -     6%    17%  1.00x  ONLINE  -

and yet, after enabling replication, the web page shows at least double that size. I tried removing and re-adding the VMs, and I also tried removing the storage and re-adding it; same result.

Thank you
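
As a side note (an assumption about where the two numbers come from, not something confirmed in this thread): the storage summary in the GUI reports the dataset-level used/available values, while zpool list shows the physically allocated blocks, so the two can diverge once reservations come into play. Both can be queried directly for comparison:

Code:
# dataset-level view (counts reservations)
zfs get -Hp used,available zfspool
# pool-level view (physically allocated blocks)
zpool list -Hp -o name,alloc,free zfspool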
 

Attachments

  • Capturezfs.JPG (59.8 KB)
Anybody willing to weigh in on this issue? I am really curious whether I messed up something in the configuration, or whether these snapshots are indeed the size of the whole VM and take up that much space.

Thx
 
what does 'zfs list -t all' show?
 
Here is the output of zfs list -t all:

root@pve01-nyc:~# zfs list -t all
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
zfspool                                                 163G   267G    96K  /zfspool
zfspool/vm-100-disk-1                                  77.6G   310G  34.3G  -
zfspool/vm-100-disk-1@__replicate_100-0_1530106200__   19.6M      -  34.3G  -
zfspool/vm-205-disk-1                                  85.7G   310G  42.3G  -
zfspool/vm-205-disk-1@__replicate_205-0_1530106200__      0B      -  42.3G  -


...not sure why it shows vm-100-disk-1 as 77 GB - is that normal? This is a 42 GB virtual machine. vm-205-disk-1 is a 42 GB virtual machine too.

Formatting got screwed up, so I am uploading a screenshot as well.

Any help appreciated.

Thank you
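
One possible explanation to check (a hedged guess, not confirmed in this thread): on a non-sparse zvol, ZFS keeps a refreservation roughly the size of the volume, and once a replication snapshot exists, the data referenced by that snapshot is accounted for on top of the reservation, which adds up to about twice the disk size. The breakdown can be inspected per zvol:

Code:
zfs get volsize,refreservation,usedbydataset,usedbysnapshots,usedbyrefreservation zfspool/vm-100-disk-1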
 

Attachments

  • zfs.JPG (46.6 KB)
What type of ZFS redundancy do you have? raidz/raidz2? If yes, then this comes from how ZFS on Linux reports the sizes.
 
I used a mirror (RAID 1) to create it. Just so you know, this does not happen until I enable storage replication. If storage replication is NOT enabled, I see the normal (true) size of the virtual machine, 42 GB (it reads a little more, but it is around 42 GB as opposed to twice as big).

EDIT - I used RAID 1, command: zpool create -f -o ashift=12 zfspool mirror device1 device2

Let me know.

Thank you
 
It seems like the storage replication is doing this. If I remove the storage replication from VM 205, which resides on node 2 (pve02) with pve01 as the target, it shows:

root@pve02-nyc:~# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
zfspool                 121G   239G    96K  /zfspool
zfspool/vm-100-disk-1  77.6G   283G  34.3G  -
zfspool/vm-205-disk-1  43.3G   240G  42.3G  -    <- after disabling storage replication
root@pve02-nyc:~#

NOTE: vm-100, which is the same size, still shows 77.6 GB because storage replication is enabled for it on node 1 (pve01) with pve02 (node 2) as the target.

Thank you
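
To cross-check which VMs still have replication jobs (and therefore still carry __replicate_* snapshots), the replication CLI can list them; this is just a suggestion, not something used in the thread:

Code:
# configured replication jobs and their current state
pvesr list
pvesr status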
 
Anybody on this? Can anybody confirm this or compare their snapshot sizes to mine? Maybe it is the way it should be. Again, the "issue" is that after enabling storage replication, the on-disk size of a VM doubles on the target storage as well as on the storage where the VM primarily resides.

Thank you
 
In my environment I also notice that the allocated space is almost double the size of the VM disks.
From my perspective this is not an issue and it comes from my setup: I'm using ZFS storage (VM disks are datasets) with volblocksize=4k, so there is significant metadata overhead - AFAIK.
 
One more thing to note: with ashift=9 you will have less wasted space compared to ashift=12.
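
For anyone who wants to check these values on their own setup (example commands with names taken from the earlier output; both settings are fixed at creation time and only affect newly created zvols/pools):

Code:
# block size of an existing zvol (set at creation, cannot be changed later)
zfs get volblocksize zfspool/vm-100-disk-1
# ashift the pool was created with (0 means auto-detected)
zpool get ashift zfspool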
 
Can confirm.
AFTER you configure replication, the used size in the ZFS pool doubles.
The ZFS pool was created as:
Code:
zpool create -f -o ashift=12 data mirror /dev/sda4 /dev/sdb4
(Attachments: 1.png, 2.png, 3.png)
As you can see, the VM 513 disk image, for example, shows as 32 GB in the GUI, but in the CLI it shows as 65.2 GB, doubled, as are all the other VM disk image usages.

Any ideas, why that is?
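
One way to put those numbers side by side (a suggested check; the pool name is taken from the zpool create line above) is to list only the zvols with their nominal size, actual usage and reservation:

Code:
zfs list -t volume -o name,volsize,used,refreservation,usedbysnapshots -r data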
 
Same here,

this is a big problem for us, as we want to use ZFS pools with replication for backup disks within VMs.

We have a ~8 TB RAIDZ1 pool; two VM disks of 2 TB each fill up the pool completely if we enable replication.

Has anyone tried thin provisioning?
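
On the thin provisioning question, a sketch of one approach (not confirmed in this thread; the storage id and disk name are examples): the Proxmox ZFS storage has a sparse option that creates new zvols without a refreservation, and for existing disks the reservation can be dropped manually. The usual caveat applies: thin volumes can let the pool run completely out of space at runtime.

Code:
# /etc/pve/storage.cfg - thin provisioning for newly created disks (example storage id)
zfspool: zfspool
        pool zfspool
        content images,rootdir
        sparse 1

# drop the reservation on an existing zvol (example name, adjust to your disk)
zfs set refreservation=none zfspool/vm-100-disk-1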
 
