Hello,
I'm using Proxmox 6 with ZFS. I wanted to use replication to another node, but after a few replications the jobs just stopped because the pool ran out of space. I have about 1.1 TB used by VMs and the total disk space is close to 1.6 TB.
Here is my setup and what I found.
I've created 3 VMs on a mirrored ZFS pool. The next step was to clone those 3 VMs (done via the GUI: VM -> Clone -> new ID).
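For reference, I believe the GUI full clone is roughly equivalent to this on the CLI (the IDs are mine, and --full is my assumption based on the clone mode I picked):
Code:
# full clones of the three originals (what the GUI did, as far as I can tell)
qm clone 100 103 --full 1
qm clone 101 104 --full 1
qm clone 102 105 --full 1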
So it looks like this (before replication):
Code:
zfs list -t all -r -o space VM-DISKS-SSD
NAME                        AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
VM-DISKS-SSD                 502G  1.19T        0B     96K             0B      1.19T
VM-DISKS-SSD/vm-100-disk-0   654G   155G        0B   2.05G           153G         0B
VM-DISKS-SSD/vm-101-disk-0   706G   206G        0B   1.74G           205G         0B
VM-DISKS-SSD/vm-102-disk-0   706G   206G        0B   1.82G           204G         0B
VM-DISKS-SSD/vm-103-disk-0   506G   155G        0B    150G          4.38G         0B
VM-DISKS-SSD/vm-104-disk-0   507G   206G        0B    200G          5.85G         0B
VM-DISKS-SSD/vm-105-disk-0   507G   206G        0B    200G          5.85G         0B
100-102 are the original machines and 103-105 are the clones (100->103, 101->104, 102->105).
Everything up to this point was fine. The only thing that bothered me was that for the clones USEDDS equals the full disk size instead of the data actually used inside the VM (as it does for machines 100-102), but that was not a problem for me.
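To compare the originals with the clones I checked the dataset properties; I'm guessing these are the ones that matter here:
Code:
# compare an original zvol with its clone (volsize/refreservation vs. data actually written)
zfs get volsize,refreservation,written,used VM-DISKS-SSD/vm-100-disk-0 VM-DISKS-SSD/vm-103-disk-0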
And this is where the problem starts...
When I set up replication jobs for all the machines, and replication starts processing machine 103 and higher, the space used on the ZFS filesystem becomes twice (or more) the space originally used by that machine.
Here is the output while replication is processing machine 103. USED is now 305G (not 155G).
Code:
zfs list -t all -r -o space VM-DISKS-SSD
NAME                                                        AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
VM-DISKS-SSD                                                 143G  1.54T        0B     96K             0B      1.54T
VM-DISKS-SSD/vm-100-disk-0                                   298G   157G     1.24M   2.05G           155G         0B
VM-DISKS-SSD/vm-100-disk-0@__replicate_100-0_1600415820__       -  1.24M         -       -              -          -
VM-DISKS-SSD/vm-101-disk-0                                   350G   208G     1.05M   1.74G           206G         0B
VM-DISKS-SSD/vm-101-disk-0@__replicate_101-0_1600415845__       -  1.05M         -       -              -          -
VM-DISKS-SSD/vm-102-disk-0                                   350G   208G     1.16M   1.82G           206G         0B
VM-DISKS-SSD/vm-102-disk-0@__replicate_102-0_1600415866__       -  1.16M         -       -              -          -
VM-DISKS-SSD/vm-103-disk-0                                   298G   305G     2.35M    150G           155G         0B
VM-DISKS-SSD/vm-103-disk-0@__replicate_104-0_1600415909__       -  2.35M         -       -              -          -
[cut]
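As far as I understand the zfs list -o space output, USED is the sum of USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD, so for 103 that is roughly 150G + 155G ≈ 305G. This is how I watched it while the job was running (nothing special, just repeating the list and checking the reservation columns):
Code:
# refresh the space breakdown every 10 seconds while replication runs
watch -n 10 'zfs list -t all -r -o space VM-DISKS-SSD'
# look at the reservation of the clone whose usage doubles
zfs get refreservation,usedbyrefreservation,usedbysnapshots VM-DISKS-SSD/vm-103-disk-0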
Of course the same problem shows up when the replication job starts replicating VM 104 or 105.
If I delete the replication jobs for machines 103, 104 and 105, the used space goes back to normal.
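After removing the jobs I confirmed that the replication snapshots were gone, more or less like this:
Code:
# list remaining replication jobs and any leftover snapshots on the pool
pvesr list
zfs list -t snapshot -r VM-DISKS-SSD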
Does anyone have an idea what's going on?
Thanks!