Search results

  1. changing the gateway to non default network

    We have been using the default gateway on Proxmox the way it was set up during installation, meaning it sits on the cluster network. We want to move it to a 10 Gbps interface so the PVE VM backups run faster and do not risk interfering with the cluster network. I saw a post here that...
  2. Performance PVE 5.3.1 vs. 5.4.1

    I saw a post shortly after 5.4.1 was released (which I cannot find now) saying there was a significant performance degradation for VMs after updating to 5.4.1. I am planning to update from 5.3.1, and our cluster does not suffer from any performance issues now. Is there anything in particular in 5.4.x...
  3. vzdump backup network

    Which network does vzdump use - or maybe it would be better to ask how to define that network on Proxmox? I have 3 networks on my Proxmox host: 1. PVE cluster (1 Gbps link aggregation), 2. hosting VMs (1 Gbps link aggregation), 3. Ceph public (10 Gbps link...
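A note on the question above: vzdump itself has no network setting; backup traffic simply follows the route to the backup storage's address. So the usual way to steer backups onto the 10 Gbps network is to give the backup target (e.g. an NFS server) an IP on that subnet. A minimal sketch of the storage definition, where the storage ID, server IP, and export path are all assumptions:

```
# /etc/pve/storage.cfg (sketch - "backup-nfs", the server IP, and the export path are assumptions)
nfs: backup-nfs
        server 10.10.10.50      # address on the 10 Gbps subnet, so backups route over that link
        export /mnt/backups
        content backup
        options vers=3
```

With this, the kernel routing table decides the interface: as long as the Proxmox host has an address in 10.10.10.0/24 on the fast NIC, backup traffic leaves through it rather than through the cluster network.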
  4. 9000 MTU size on 1GB interface

    Let me just clarify: I am using Proxmox backup to back up VMs to external NFS storage. I guess I am using vmbr0 for backup as the "default" network, which is also my cluster/corosync network. I was not aware that the backup (the Proxmox backup) network assignment can be changed. I don't see any type...
  5. Changing a network card

    Thx - will give it a try...
  6. 9000 MTU size on 1GB interface

    We have 3 networks, each on its own dual-port bonded interface: 1. PVE cluster (1 Gbps), 2. client-facing network for VMs (1 Gbps), 3. Ceph public (10 Gbps). I do have a 9000 MTU on the 10 Gbps connection for Ceph, but my question is really whether it makes sense to put a 9000 MTU on the 1 Gbps...
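For context on the MTU question: a 9000 MTU only works if every device on the segment (both hosts and the switch ports between them) agrees on it; a mismatch causes fragmentation or silent drops. On a 1 Gbps link the throughput gain from jumbo frames is small, which is why they are usually left at 1500 there and enabled only on the storage network. A minimal sketch of a bonded interface with jumbo frames, where the interface names and address are assumptions:

```
# /etc/network/interfaces (sketch - bond name, slave NICs, and address are assumptions)
auto bond2
iface bond2 inet static
        address 10.0.30.11/24
        bond-slaves enp5s0f0 enp5s0f1
        bond-mode 802.3ad
        bond-miimon 100
        mtu 9000        # jumbo frames: the switch ports and every peer must also use 9000
```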
  7. Changing a network card

    Huh..., we broke a port on the card by pulling a cable we thought was already unplugged and just stuck - it was stuck because it was not unplugged. It is working, and we just made sure this cable is not moving with some tape :p... We want to replace it anyway, just in case somebody pulls on it...
  8. Moving cluster to new location

    Hi udo, we are using Ceph, but we are moving everything to local hard drives so we can move the Ceph servers ahead of time as well. I already did the expected 1, and the separated node is back up and operational. All seems to be working now in both locations. From the original location we will be moving 3 nodes at...
  9. Moving cluster to new location

    Again, I am moving our 4-node cluster to a new location. I thought I could just move one node (node4) to the new location and slowly move the VMs to that node over a period of two weeks, then move the remaining 3 nodes and they would resync. I am worried about the resyncing; I see that from the...
  10. Moving cluster to new location

    Thanks Kalus, that is what I thought as well. Anybody else want to chip in? Any words of wisdom or "famous last words"... before the cluster crashes :-) Thanks for any advice.
  11. Moving cluster to new location

    We will be moving everything, so the subnets/internal IPs stay the same and the latency is relatively small (under 10 ms); only the public IPs will change. The reason we are moving is that the internet in our current datacenter is flaky and goes down more often than I would like to see; it is usually for a very short period...
  12. Moving cluster to new location

    I am moving a cluster of 4 nodes to a new location. No need to worry about the external storage, as I can fit all the VMs on local drives. I was thinking of moving one node and doing "expected 1" so it can work by itself in the new location while the remaining 3 nodes keep working in the old location, then...
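The "expected 1" mentioned above refers to lowering the corosync quorum requirement so a single detached node can keep operating. A sketch of the commands involved, assuming a 4-node cluster where the moved node can no longer reach the other three (these need a live Proxmox VE node and are shown for illustration only):

```shell
# On the node moved to the new site, which cannot reach the other 3 nodes:
pvecm status        # shows the node as non-quorate (activity blocked)
pvecm expected 1    # lower expected votes so this single node becomes quorate
pvecm status        # should now report quorate; VMs can be started/managed again
```

Note the caveat implied in the thread: while the cluster is split this way, both sites can make changes independently, so it is safest to keep guest operations to one side until the nodes are rejoined.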
  13. Wrong stats for memory and disk usage

    As only the delta is synced to the standby node, I did not think it would take the whole disk space twice... hmm, maybe that is the snapshot that is being synced to the target?
  14. Wrong stats for memory and disk usage

    dcsapak wrote: "zfs takes half of your memory by default" - is this really happening? I know ZFS uses a lot of RAM, but half of what you have, or were you just referring to Tomx1's particular case? In my case storage replication takes about double the size of the replicated VM - see this post...
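To the question above: on Linux, OpenZFS does by default target roughly half of installed RAM for the ARC (read cache), so the statement is general rather than specific to that case. The ARC shrinks under memory pressure, but on a VM host it is common to cap it explicitly. A sketch of such a cap, where the 8 GiB value is only an example:

```
# /etc/modprobe.d/zfs.conf (sketch - the 8 GiB cap is an example value, size it for your host)
options zfs zfs_arc_max=8589934592

# To apply without a reboot, the same parameter can be set at runtime:
#   echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```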
  15. JBOD for zfs

    Is the cache the only issue? I should have started with this, but I looked at the PERC H730P documentation and found that there is an option to disable the cache for non-RAID disks. It is under advanced controller properties. Has anybody done that successfully and can share the results (meaning...
  16. Storage replication

    Anybody with a similar issue, or can anybody confirm? Thx
  17. JBOD for zfs

    I have PVE installed on RAID 1 on a PERC H730P - it is not feasible to move it, as it is in production with too many VMs, etc. I want to use storage replication, so I need ZFS, which does not like RAID controllers in general - that is what I found after doing some digging on this forum. The only option...
  18. Storage replication

    Anybody on this? Can anybody confirm or compare their snapshot size to mine? Maybe it is the way it should be. Again, the "issue" is that after enabling storage replication, the on-disk size of a VM doubles on the target drive as well as on the drive where the VM primarily resides. Thank you.
  19. Storage replication

    It seems like the storage replication is doing this: if I remove the storage replication from VM 205, which resides on node 2 (pve02) with a target of pve01, it shows: root@pve02-nyc:~# zfs list NAME USED AVAIL REFER MOUNTPOINT zfspool 121G 239G 96K...
  20. Storage replication

    I used mirror 1 to create it. Just so you know, this does not happen until I enable storage replication. If storage replication is NOT enabled, I can see the normal (true) size of the virtual machine, 42 GB (it reads a little more, but it is around 42 GB, as opposed to twice as big). EDIT - I...
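On the doubling observed in the replication threads: Proxmox VE storage replication keeps the last replication snapshot on both the source and the target dataset so that subsequent syncs can be incremental. If the guest rewrites many blocks between syncs, the snapshot pins the old copies, and the dataset's USED can approach twice the guest's size. A sketch of how to confirm where the space is going, assuming the pool name "zfspool" from the posts above (requires a host with ZFS):

```shell
# List replication snapshots (Proxmox names them __replicate_<jobid>_<timestamp>__)
zfs list -t snapshot -r zfspool

# Show how much space snapshots pin versus the live data, per dataset
zfs get -r usedbysnapshots,usedbydataset zfspool
```

If most of the extra space shows up under "usedbysnapshots", the doubling is expected behavior rather than a bug, and it shrinks again once the snapshot is rotated or the replication job is removed.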