Search results

  1. B

    9000 MTU size on 1GB interface

    Let me just clarify: I am using Proxmox backup to back up VMs to an external NFS storage. I guess I am using vmbr0 for backups as the "default" network, which is also my cluster/corosync network. I was not aware that the backup (Proxmox backup) network assignment can be changed. I don't see any type...
  2. B

    Changing a network card

    Thx - will give it a try...
  3. B

    9000 MTU size on 1GB interface

    We have 3 networks, each on its own bonded dual-port interface: 1. PVE cluster - 1 Gbps, 2. client-facing network for VMs - 1 Gbps, 3. Ceph public - 10 Gbps. I do have 9000 MTU on the 10 Gbps connection for Ceph, but my question is really whether it makes sense to put 9000 MTU on the 1 Gbps... (an MTU config sketch follows this list)
  4. B

    Changing a network card

    Huh... we broke a port on the card by pulling a cable we thought was already unplugged and just stuck - it was stuck because it was not unplugged. It is still working, and we made sure the cable cannot move with some tape :p... We want to replace it anyway, just in case somebody pulls on it...
  5. B

    Moving cluster to new location

    Hi Udo, we are using Ceph, but we are moving everything onto local hard drives so we can move the Ceph servers ahead of time as well. I already set the expected votes to 1 and the separated node is back up and operational. All seems to be working now in both locations. From the original location we will be moving 3 nodes at...
  6. B

    Moving cluster to new location

    Again, I am moving our 4-node cluster to a new location. I thought I could just move one node (node4) to the new location and slowly migrate the VMs to that node over a period of two weeks, then move the remaining 3 nodes and have them resync. I am worried about the resyncing; I see that from the...
  7. B

    Moving cluster to new location

    Thanks Klaus, that is what I thought as well. Anybody else want to chip in? Any words of wisdom or "famous last words"... before the cluster crashes :-) Thanks for any advice.
  8. B

    Moving cluster to new location

    We will be moving everything, so the subnets/internal IPs stay the same and the latency is relatively low (~10 ms); only the public IPs will change. The reason we are moving is that the internet in our current datacenter is flaky and goes down more often than I would like; it is usually for a very short period...
  9. B

    Moving cluster to new location

    I am moving a cluster of 4 nodes to a new location. No need to worry about the external storage, as I can fit all the VMs on local drives. I was thinking of moving one node and setting expected votes to 1 so it can work by itself in the new location, while the remaining 3 nodes keep working in the old location; then... (a quorum/expected-votes sketch follows this list)
  10. B

    Wrong stats for memory and disk usage

    As only the delta is synced to the standby node, I did not think it would take the whole disk space twice... hmm, maybe that is the snapshot that is being synced to the target?
  11. B

    Wrong stats for memory and disk usage

    dcsapak wrote: zfs takes half of your memory by default - is this really happening? I know ZFS uses a lot of RAM, but half of what you have, or were you just referring to Tomx1's particular case? In my case storage replication takes about double the size of the replicated VM - see this post...
  12. B

    JBOD for zfs

    Is the cache the only issue? I should have started with this, but I looked at the PERC H730P documentation and found that there is an option to disable the cache for non-RAID disks. It is under the advanced controller properties. Has anybody done that successfully and can share the results (meaning...
  13. B

    Storage replication

    Anybody with a similar issue, or anybody who can confirm? Thx
  14. B

    JBOD for zfs

    I have PVE installed on RAID 1 on a PERC H730P - it is not feasible to move it, as it is in production with too many VMs, etc. I want to use storage replication, so I need ZFS, which does not like RAID controllers in general - that is what I found after doing some digging here on this forum. The only option...
  15. B

    Storage replication

    Anybody on this? Can anybody confirm or compare their snapshot size to mine? Maybe it is the way it should be. Again, the "issue" is that after enabling storage replication, the space used by a VM doubles on the target drive as well as on the drive where the VM primarily resides. Thank you
  16. B

    Storage replication

    It seems like storage replication is doing this; if I remove the storage replication job from VM 205, which resides on node 2 (pve02) with pve01 as the target, it shows: root@pve02-nyc:~# zfs list NAME USED AVAIL REFER MOUNTPOINT zfspool 121G 239G 96K...
  17. B

    Storage replication

    I used mirror 1 to create it. Just so you know, this does not happen until I enable storage replication. If storage replication is NOT enabled, I see the normal (true) size of the virtual machine, 42 GB (it reads a little more, but it is around 42 GB as opposed to twice as big). EDIT - I...
  18. B

    Storage replication

    Here is zfs list -t all: root@pve01-nyc:~# zfs list -t all NAME USED AVAIL REFER MOUNTPOINT zfspool 163G 267G 96K /zfspool zfspool/vm-100-disk-1 77.6G 310G...
  19. B

    Storage replication

    Anybody care to weigh in on the issue? I am really curious whether I messed up something in the configuration or whether these snapshots are indeed the size of the whole VM and take up that much space. Thx (a ZFS space-accounting sketch follows this list)
  20. B

    Storage replication

    I am referring to: under the node >> Storage, select the storage, then Summary. See attached. There are two VMs there, both 42 GB in size, so the total storage for the two VMs is as shown in zpool list: root@pve01-nyc:~# zpool list NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT...
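
For the jumbo-frame question in result 3, here is a minimal sketch of how MTU 9000 is typically set in /etc/network/interfaces on a PVE node. The interface and bridge names (eno3/eno4, bond1, vmbr1) and the address are illustrative assumptions, not taken from the posts above, and jumbo frames only help if every NIC and switch port along the path is configured for them.

    # /etc/network/interfaces (excerpt) - hypothetical names
    auto bond1
    iface bond1 inet manual
        bond-slaves eno3 eno4
        bond-mode 802.3ad
        mtu 9000

    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.11/24
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        mtu 9000

After applying, the path MTU can be verified with ping -M do -s 8972 <peer-ip> (8972 bytes of payload plus 28 bytes of IP/ICMP headers = 9000).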
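
For the cluster-move threads (results 5 and 9), here is a minimal sketch of the expected-votes step mentioned there, assuming the standard pvecm tooling. It would be run on the node that is temporarily alone at the new location so it can keep operating without the other three votes:

    # check current quorum state
    pvecm status
    # allow this single node to reach quorum on its own
    # (use with care: risk of split brain if the other nodes keep running the same VMs)
    pvecm expected 1

Once the remaining nodes arrive and rejoin, pvecm status should be checked again to confirm that the expected votes reflect the full cluster before relying on HA or making further changes.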
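
For the storage-replication space question (results 15-20), here is a minimal sketch of how to break down where the extra space is accounted, using the pool and dataset names visible in result 18. One plausible explanation, stated here as an assumption rather than something confirmed in the posts, is thick-provisioned zvols: with a refreservation set, the replication snapshot forces ZFS to keep enough space reserved for a full overwrite of the volume, which roughly doubles USED even though the snapshot itself holds little data.

    # break USED down into dataset data, snapshots, reservations and children
    zfs list -o space -r zfspool
    # list the replication snapshots themselves
    zfs list -t snapshot -r zfspool
    # check whether the zvol carries a reservation
    zfs get refreservation,volsize zfspool/vm-100-disk-1

If usedrefreserv is the part that doubles, the data is not actually stored twice; the space is only reserved and drops back once the snapshot is removed or the refreservation is cleared.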