Thanks Kalus, that is what I thought as well.
Anybody else want to chip in? Any words of wisdom or "famous last words"... before the cluster crashes :-)
Thanks for any advice
We will be moving everything, so same subnets/internal IPs and relatively small latency (>10ms); only the public IPs will change. The reason we are moving is that the internet in our current datacenter is flaky and goes down more often than I would like. It is usually only for a very short period...
I am moving a cluster of 4 nodes to a new location. No need to worry about external storage as I can fit all the VMs on local drives. I was thinking of moving one node and running pvecm expected 1 so it can work by itself in the new location, while the remaining 3 nodes would keep working in the old location, then...
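Roughly what I have in mind for the relocated node, assuming the standard pvecm tooling (the hostname pve04-nyc is just a placeholder for whichever node gets moved first):
# on the single node in the new location, once it can no longer see the other three,
# lower the expected votes so this one node is quorate on its own:
root@pve04-nyc:~# pvecm expected 1
# sanity check that the node now reports Quorate:
root@pve04-nyc:~# pvecm status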
As only the delta is synced to the standby node, I didn't think it would take the whole disk space twice... hmm
Maybe that is the snapshot that is being synced to the target?
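One way to check, I think, is to list the snapshots themselves; if I understand the docs correctly, the replication snapshots carry __replicate__ in their names (pool name as in my setup):
# list only snapshots under the pool; the replication ones should stand out by name
root@pve02-nyc:~# zfs list -t snapshot -r zfspool | grep replicate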
dcsapak wrote: zfs takes half of your memory by default - is this really happening? I know ZFS uses a lot of RAM, but half of what you have, or were you just referring to Tomx1's particular case?
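In case it helps anyone else reading along: from what I gather, ZFS on Linux does cap the ARC at roughly half of the RAM by default, but it can be limited. A rough sketch of what I plan to try (the 4 GiB value is only an example, pick whatever fits your host):
# cap the ZFS ARC at 4 GiB (value is in bytes); example value only
root@pve02-nyc:~# echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
# takes effect on the next boot; on a root-on-ZFS install refresh the initramfs as well
root@pve02-nyc:~# update-initramfs -u
# current ARC size and limit can be read from the kernel stats:
root@pve02-nyc:~# grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats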
In my case storage replication takes about double the size of the replicated VM - see this post...
Is the cache the only issue?
I should have started with this, but I looked at the PERC H730P documentation and found that there is an option to disable the cache for non-RAID disks. It is under Advanced Controller Properties. Has anybody done that successfully and can share the results (meaning...
I have PVE installed on RAID 1 on a PERC H730P - it is not feasible to move it as it is in production with too many VMs etc. I want to use storage replication, so I need ZFS, which does not like RAID controllers in general - that is what I found after doing some digging on this forum. The only option...
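If I do end up disabling the controller cache for the non-RAID disks, I assume the effective write-cache state can be double-checked from inside PVE; something along these lines (the device name /dev/sdb is only an example, and sdparm may need to be installed first):
# query the disk write-cache setting as the OS sees it
root@pve01-nyc:~# smartctl -g wcache /dev/sdb
# or via the SCSI caching mode page
root@pve01-nyc:~# sdparm --get=WCE /dev/sdb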
Anybody on this? Can anybody confirm or compare their snapshot size to mine? Maybe it is the way it should be. Again, the "issue" is that after enabling storage replication the size on the drive for a VM doubles on the target drive as well as on the drive where the VM primarily resides.
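To make the comparison easier, this is how I am looking at it on my side (I am new to ZFS, so I am assuming these are the relevant properties; the dataset name vm-205-disk-1 is just what I expect my disk to be called):
# per-dataset breakdown of where the space goes
root@pve02-nyc:~# zfs get used,usedbydataset,usedbysnapshots,usedbyrefreservation,refreservation zfspool/vm-205-disk-1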
Thank you
It seems like the storage replication is doing this; if I remove the storage replication from VM 205, which resides on node 2 (pve02) with a target of pve01, it shows:
root@pve02-nyc:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfspool 121G 239G 96K...
I used mirror 1 to create it. Just so you know, this does not happen until I enable storage replication. If storage replication is NOT enabled, I can see the normal (true) size of the virtual machine, 42GB (it reads a little bit more, but it is around 42GB as opposed to twice as big).
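For completeness, the pool itself is nothing special, just a plain mirror created more or less like this (the device names here are examples, not my actual disks):
# two-way mirror pool; device names are placeholders
root@pve01-nyc:~# zpool create zfspool mirror /dev/sdb /dev/sdc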
EDIT - I...
Here is zfs list -t all
root@pve01-nyc:~# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
zfspool 163G 267G 96K /zfspool
zfspool/vm-100-disk-1 77.6G 310G...
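A breakdown that might make this easier to reason about (I am assuming the space-oriented view of zfs list is the right thing to look at here, i.e. how much is the dataset itself vs. snapshots vs. reservation):
# the USEDSNAP / USEDDS / USEDREFRESERV columns split the USED value per dataset
root@pve01-nyc:~# zfs list -o space -r zfspool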
Anybody want to weigh in on the issue? I am really curious whether I messed up something in the configuration or whether these snapshots are indeed the size of the whole VM and taking up that much space.
Thx
I am referring to:
Under Node >> Storage, select it, then Summary. See attached. There are two VMs there, both of size 42GB, so the total storage for the two VMs is as shown in zpool list:
root@pve01-nyc:~# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT...
I guess these are snapshots (I am new to ZFS), but are they really big enough to double the size of the original VM?
I added a second VM and it also doubled in size on the drive as seen in the GUI (once replication was enabled). Is there a way to deal with this somehow? That is a lot of disk space...
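For reference, the GUI number seems to match what pvesm status reports, which I assume is the zfs-level used space rather than what the pool has physically allocated (assuming the storage ID is also called zfspool):
# what the GUI summary appears to be based on
root@pve01-nyc:~# pvesm status --storage zfspool
# versus the physically allocated space at pool level
root@pve01-nyc:~# zpool list zfspool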
Any advice...
Just to elaborate, the GUI shows: Usage 23.77% (85.66 GiB of 360.38 GiB) for just one VM of size 42GB, so it doubles the size of it.
The output from the zpool list shows:
root@pve02-nyc:~# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zfspool...
It seems like storage replication doubles the reported storage for zfspool. I see it in the GUI but I don't in the CLI (the zpool list output above shows the correct values).
Is this a bug in 5.1.42? Should I ignore it? The install, storage and replication are otherwise OK.
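One thing I still want to rule out, based on my reading so far: the doubling might simply be the zvol's refreservation being counted on top of the snapshot that replication keeps around, in which case a thin-provisioned ("sparse") ZFS storage would not show it. A sketch of what I mean (dataset name is an example, and dropping the refreservation gives up the space guarantee):
# does the zvol carry a full-size refreservation?
root@pve01-nyc:~# zfs get refreservation,usedbyrefreservation zfspool/vm-100-disk-1
# new disks can avoid it if the storage is marked sparse in /etc/pve/storage.cfg:
#   zfspool: zfspool
#       pool zfspool
#       sparse 1
# an existing zvol could be switched by hand:
root@pve01-nyc:~# zfs set refreservation=none zfspool/vm-100-disk-1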
We have much more distance (1500 miles) and a latency of at least 40ms. We could get much lower latency if we used data centers from the same company (currently we have two data centers from different companies), but even then we could not get below 10ms, if we even got 10ms to start with.
Our...
What about other solutions? Can, for example, VMware do this reliably (two geographical locations)? Does anybody know? ...or does it need some crazy requirements to work?
Thank you