Search results

  1. Moving pve cluster to a new switch

    Yes, VoIP hosting; some clients have 24/7 service. Uptime is paramount. Thank you all for the advice. Proxmox has been a stable environment for us for over 10 years now, starting with the 3.x version and updating over the years.
  2. Moving pve cluster to a new switch

    Moving the PVE cluster network one node at a time had no effect on the overall cluster or Ceph in this setup; Ceph did not even notice. Proxmox did notice, I saw the nodes being moved go offline, but I had disabled HA on all VMs for the time being. The whole operation completed successfully. Thx.
  3. Moving pve cluster to a new switch

    Yes, I see it gets complicated. I have a 6-node Proxmox hypervisor cluster, a 4-node Ceph cluster running on Proxmox, and another 4-node Ceph cluster running on Proxmox. So a total of 3 separate clusters, with the Proxmox hypervisor cluster only running the VMs and the remaining two only running Ceph...
  4. Moving pve cluster to a new switch

    Yes, but I have 3 networks and a separate interface for each: 1. PVE cluster - keeping PVE quorum, 2. Ceph public - facing the Proxmox hypervisors, 3. Ceph private - syncing storage across the Ceph servers. I am running Ceph on separate servers, but Ceph is installed on Proxmox that is working just to...
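
    For reference, this kind of split usually shows up in ceph.conf as separate public and cluster networks, with monitor quorum running over the public network and OSD replication over the cluster network; a minimal sketch, using purely hypothetical subnets rather than the poster's actual ones:

      # /etc/pve/ceph.conf (hypothetical 10.10.10.0/24 public and 10.10.20.0/24 private subnets)
      [global]
          # network facing the Proxmox hypervisors and the monitors
          public_network  = 10.10.10.0/24
          # network used by the OSDs to sync/replicate between Ceph servers
          cluster_network = 10.10.20.0/24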
  5. Moving pve cluster to a new switch

    Is Ceph quorum using the PVE cluster network? I thought it was using the Ceph private network along with data syncing.
  6. Moving pve cluster to a new switch

    I have a cluster in production that I need to move to a new switch. We are using MCLAG redundant interfaces and need to move 6 nodes from one switch cluster to the other. That should not take long, but I was wondering if I should execute pvecm expected -1 on each node before I do that, any advice...
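
    Whatever is decided about expected votes, quorum can at least be watched from the shell while each node is re-cabled; a minimal sketch of that kind of check (nothing here is specific to this cluster):

      # before unplugging a node: confirm the cluster is currently quorate
      pvecm status | grep -i quorate
      # while the uplinks move to the new switch: watch the corosync link state
      corosync-cfgtool -s
      # afterwards: confirm Ceph is healthy again
      ceph -s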
  7. Test, nested Ceph cluster SSD latency up to 2000ms

    I configured a nested Proxmox Ceph cluster (3 nodes) for testing on my Proxmox server. 3 nodes, plenty of RAM, CPU power, etc. I used 3 good SAS SSDs, 1 per virtual machine. Currently there is nothing else running on this Proxmox server. All networking works fine, I have 2 x 10Gbps ports and...
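
    To put numbers on latency like this, Ceph itself reports per-OSD commit/apply latency; a small sketch with generic commands, nothing specific to the nested setup:

      # per-OSD commit/apply latency in milliseconds as seen by Ceph
      ceph osd perf
      # overall health, including any slow-ops warnings
      ceph -s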
  8. [DMA Read NO_PASID] Request device [00:02.0] fault addr 0xcce25000 [fault reason 0x05] PTE Write access is not set

    Experienced it today with my 10-year-old R720 server that I use for testing: DMAR DRHD Request device [03:00.0] fault addr ... fault reason PTE read access is not set. I had to replace a failing Perc H710P Mini and took an H310 Mini that was lying around; the server recognized it and did not make a fuss...
  9. PBS backing up relatively big virtual machines

    I am using PBS to back up about 300 systems running on PVE and then sync them to another PBS at a backup location. It works GREAT, have been doing it for the last year with daily and weekly backups, with no issues. These VMs are relatively small in size; the cumulative size is about 13-14TB. I have a VM...
  10. Adding a NIC post install advice

    Adding network cards should not matter in your case. You can run your VMs from the 1Gbps interfaces for now. Then you will add the NICs and simply change the network settings on the VMs. As for the cluster, 1Gbps is more than enough.
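
    Switching a VM over once the new NIC has its own bridge is a single command; a sketch with a hypothetical VM ID 100 and bridge vmbr1:

      # move the VM's first virtual NIC to the new bridge
      # (omitting the MAC makes Proxmox generate a new one; keep virtio=<existing MAC> to preserve it)
      qm set 100 --net0 virtio,bridge=vmbr1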
  11. Strange network behavior on one port in a bond

    The issue has been fixed. The Ceph cluster network, even with 4 nodes, needs to use the layer3+4 hash policy.
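
    In /etc/network/interfaces terms that fix is a single option on the bond; a minimal sketch reusing the slave names mentioned later in the thread, with a hypothetical address:

      auto bond1
      iface bond1 inet static
              address 10.10.20.11/24
              bond-slaves enp4s0f0 enp5s0f0
              bond-mode 802.3ad
              bond-miimon 100
              # hash on layer 3+4 so multiple Ceph TCP streams spread across both links
              bond-xmit-hash-policy layer3+4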
  12. VNC argument/port question

    I have VNC working with args: -vnc 0.0.0.0:xx added to the config file. What happens if the VM is migrated to another node where another VM is using the same port? Is there a wider range of ports that can be used instead of 59xx, or can that range perhaps be extended so I can add it to more VMs from...
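
    For context on the 59xx numbers: the value after -vnc is a display number, and QEMU listens on TCP port 5900 plus that number, so the practical approach is giving every VM a cluster-wide unique display. A sketch with a hypothetical VM ID and display number:

      # /etc/pve/qemu-server/101.conf (hypothetical VM ID and display number)
      args: -vnc 0.0.0.0:77
      # QEMU then listens on TCP 5900 + 77 = 5977; keeping displays unique across
      # the whole cluster avoids port clashes after a migration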
  13. Strange network behavior on one port in a bond

    Is there anything else that can be checked? I cannot do iperf for now. I was able to apply a change to the description of an interface, which restarted the network on node1. Now the outbound traffic is using enp4s0f0 and enp5s0f0 is not passing any traffic out (outbound), both interfaces...
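
    For when a window does open, a short iperf3 run with parallel streams is a quick way to see whether both slaves carry outbound traffic; a sketch with a hypothetical peer address:

      # on a second node: iperf3 -s
      # on node1: several parallel TCP streams so a layer3+4 hash can spread them
      iperf3 -c 10.10.20.12 -P 8 -t 30
      # watch nload (or ip -s link) on enp4s0f0 and enp5s0f0 while it runs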
  14. Strange network behavior on one port in a bond

    Please note, the traffic TO node1 from the switch (incoming) is fine and working across both interfaces, enp4s0f0 and enp5s0f0. The problem is with sending from node1 (outgoing per nload): only enp5s0f0 is sending unless it is unplugged, and then enp4s0f0 takes over; otherwise enp4s0f0 is...
  15. Strange network behavior on one port in a bond

    Cannot test at the moment, as the node is in production in a 4-node cluster with about 200 VMs on Ceph storage. I don't really have a maintenance window scheduled for now. It is working on the second slave interface. I looked at /proc/net/bonding/bond1 across the 4 nodes and they look the...
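
    The negotiated mode, hash policy and per-slave state can be compared across nodes without any downtime; a small sketch:

      # confirm the bonding mode and transmit hash policy actually in use
      grep -E 'Bonding Mode|Transmit Hash Policy' /proc/net/bonding/bond1
      # per-slave link state, speed and duplex
      grep -A4 '^Slave Interface' /proc/net/bonding/bond1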
  16. Strange network behavior on one port in a bond

    Hi, I am testing by uploading VMs to the storage and generating traffic that way, which is when I took the readings from nload. I can clearly see the presence of outgoing traffic on all other ports and/or servers and the absence of that traffic on that particular interface in the outgoing...
  17. Strange network behavior on one port in a bond

    proxmox-ve: 7.4-1, ceph: 17.2.6-pve1. I have a 4-node Ceph cluster using Proxmox as the host. The public and private networks are separated using 2 x 10Gbps ports each (2 cards per node, 4 ports total). All nodes are set up in exactly the same way. Here is an example of the Ceph private config: auto...
  18. Ceph PG #

    Thank you for your help and the warning, I will keep an eye on it.
  19. Ceph PG #

    Thank you, so the 1024 PGs would be preferred, being the value calculated with the Ceph PG calculator? I am warming up to the autoscaler; I have it running on a smaller cluster and it is just working, I guess. I am just not sure how it is making the adjustments and how they affect the...
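
    The autoscaler's reasoning can be inspected before trusting it with a production pool; a small sketch (the pool name is a placeholder):

      # per-pool size, target ratio, current and suggested PG counts
      ceph osd pool autoscale-status
      # check whether the autoscaler is set to on, warn or off for a given pool
      ceph osd pool get <pool> pg_autoscale_mode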
  20. Ceph PG #

    Proxmox 7.4.16. I am getting confused by all the numbers. I have 24 OSDs, 1.46TB SSDs, across 4 nodes, 3 replicas, a total pool size of 12TB, and it is going to be 80-85% full. I did the calculation with the Ceph PG calculator and it gives me 800, rounded to 1024 PGs, which is also the number that Ceph...
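
    The 800 vs 1024 difference is just the usual rule of thumb, roughly 100 PGs per OSD divided by the replica count and then rounded up to a power of two (the 100-per-OSD target is a common assumption, not a hard rule):

      # (OSDs x target PGs per OSD) / replicas = 24 x 100 / 3 = 800 -> next power of two = 1024
      echo $(( 24 * 100 / 3 ))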