Search results

  1. Moving cluster to new location

    Thanks Kalus, that is what I thought as well. Anybody else want to chip in? Any words of wisdom or "famous last words" ...before the cluster crashes :-) Thanks for any advice
  2. Moving cluster to new location

    We will be moving everything, so same subnets/internal IPs and relatively small latency (>10ms); only the public IPs will change. The reason why we are moving is that the internet in our current datacenter is flaky and goes down more often than I would like to see; it is usually for a very short period...
  3. Moving cluster to new location

    I am moving a cluster of 4 nodes to a new location. No need to worry about the external storage as I can fit all the VMs on local drives. I was thinking to move one node and do expected 1 so it can work by itself in the new location and the remaining 3 nodes would work in the old location, then... (see the quorum command sketch after this list)
  4. Wrong stats for memory and disk usage

    As only the delta is synced to the standby node, I did not think that it would take the whole disk space twice... hmm, maybe that is the snapshot that is being synced to the target?
  5. Wrong stats for memory and disk usage

    dcsapak wrote: zfs takes half of your memory by default - is this really happening? I know ZFS uses a lot of RAM, but half of what you have, or were you just referring to Tomx1's particular case? In my case storage replication takes about double the size of the replicated VM - see this post...
  6. JBOD for zfs

    Is the cache the only issue? I should have started with it, but I looked at the Perc h730p documentation and found that there is an option to disable the cache for non-RAID disks. It is under advanced controller properties. Has anybody done that successfully and can share the results (meaning...
  7. Storage replication

    Anybody with a similar issue, or anybody who can confirm? Thx
  8. JBOD for zfs

    I have PVE installed on RAID 1 on a Perc h730p - it is not feasible to move it as it is in production with too many VMs etc. I want to use storage replication, so I need ZFS, which does not like RAID controllers in general - that is what I found doing some digging on this forum. The only option...
  9. Storage replication

    Anybody on this? Can anybody confirm or compare their snapshot size to mine? Maybe it is the way it should be. Again, the "issue" is that after enabling storage replication the size on disk for a VM doubles on the target drive as well as on the drive where the VM primarily resides (see the snapshot space sketch after this list). Thank you
  10. Storage replication

    It seems like the storage replication is doing this; if I remove the storage replication from VM 205, which resides on node 2 (pve02) with a target of pve01, it shows: root@pve02-nyc:~# zfs list NAME USED AVAIL REFER MOUNTPOINT zfspool 121G 239G 96K...
  11. Storage replication

    I used mirror 1 to create it. Just so you know, this does not happen until I enable Storage Replication. If storage replication is NOT enabled I can see the normal (true) size of the virtual machine, 42GB (it reads a little bit more, but it is around 42GB as opposed to twice as big). EDIT - I...
  12. Storage replication

    Here is zfs list -t all: root@pve01-nyc:~# zfs list -t all NAME USED AVAIL REFER MOUNTPOINT zfspool 163G 267G 96K /zfspool zfspool/vm-100-disk-1 77.6G 310G...
  13. Storage replication

    Anybody to weigh in on the issue? I am really curious whether I messed up something in the configuration or these snapshots are indeed the size of the whole VM and take up that much space. Thx
  14. Storage replication

    I am referring to: under node >> Storage, select it, Summary. See attached. There are two VMs there, both of size 42GB, so the total storage for the two VMs is as it shows in the zpool list: root@pve01-nyc:~# zpool list NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT...
  15. Storage replication

    I guess these are snapshots (I am new to ZFS), but that big, doubling the size of the original VM? I added a second VM and it also doubled its size on the drive as seen in the GUI (once replication was enabled). Is there a way to deal with it somehow? That is a lot of disk space... Any advice...
  16. Storage replication

    Just to elaborate, the GUI shows: Usage 23.77% (85.66 GiB of 360.38 GiB) just for one VM of size 42GB, so it doubles its size. The output from zpool list shows: root@pve02-nyc:~# zpool list NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT zfspool...
  17. Storage replication

    It seems like storage replication doubles the storage for the zfspool. I see it in the GUI but I don't in the CLI (the list command shows the correct values). Is this a bug in 5.1.42? Should I ignore it? The install, storage and replication are OK.
  18. Is anybody doing HA between two geographical locations?

    We have much more distance (1500 miles) and a latency of at least 40ms. We could get much lower latency if we got data centers from the same company (currently we have two data centers from different companies), but even then we could not get below 10ms, if we would even get 10ms to start with. Our...
  19. Is anybody doing HA between two geographical locations?

    What about other solutions? Can, for example, VMware do this reliably (two geographical locations), does anybody know? ...or does it need some crazy requirements to work? Thank you
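
Result 3 above mentions doing "expected 1" on the node that moves first. On Proxmox VE that refers to temporarily lowering the cluster's expected vote count so a single node keeps quorum while the other nodes are offline or in transit. A minimal sketch, assuming it is run on the one node that should keep operating on its own during the move:

    # Temporarily tell the cluster that one vote is enough for quorum,
    # so this node stays writable while the other three are being moved:
    pvecm expected 1

    # Verify quorum and membership afterwards:
    pvecm status

Once the remaining nodes rejoin, the expected vote count grows back with them, so this is only meant as a stop-gap during the move.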
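Most of the storage replication results above circle around the same question: why does the reported usage for a VM disk roughly double once replication is enabled? One way to see where the space goes is to ask ZFS for a per-bucket breakdown; the extra space typically shows up either under the replication snapshot or under the refreservation of a thick-provisioned zvol. A minimal sketch, reusing the pool and dataset names that appear in the snippets (zfspool, vm-100-disk-1 - adjust them to your own setup):

    # Break down USED into dataset, snapshot, reservation and child usage:
    zfs list -o space zfspool/vm-100-disk-1

    # List all snapshots in the pool (replication snapshots included) with their sizes:
    zfs list -t snapshot -o name,used,refer -r zfspool

    # Show the reservation a non-sparse zvol keeps in addition to its snapshots:
    zfs get volsize,refreservation,usedbysnapshots,usedbydataset zfspool/vm-100-disk-1

If the doubling sits in usedbyrefreservation rather than in the snapshots themselves, it is an accounting effect of thick-provisioned zvols plus a held snapshot, not a second full copy of the data.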
