Search results

  1. [SOLVED] VM multicast VRRP packets drop

    Update: I tried to block VRRP packets with the firewall from the Proxmox GUI. I still see the same packet drops, even with no VRRP packets arriving on the VM net, so it's not pfSense's fault. I ran another tcpdump but there are no relevant packets; traffic is normal (ssh + postgresql client). Any ideas on... (a tcpdump sketch for isolating VRRP traffic follows the results list)
  2. [SOLVED] VM multicast VRRP packets drop

    Hi, I'm experiencing a strange problem. I'm not sure whether it is in some way related to Proxmox, but it started last night after the 5.4-13 upgrade on our nodes. As I said, last night we updated our nodes to the latest Proxmox 5.x version (5.4-13). After the node updates some monitoring systems...
  3. Hardware configuration and network setup

    Unfortunately it is not working: I'm not able to reach VMs with the VLAN aware flag, but with a bridge attached to a tagged bond it works very well. Shouldn't both work the same way? What about MTU? Do you have any idea why we are getting such bad results on the 9000 MTU network? I made a ping test with ping... (a jumbo-frame ping test sketch follows the results list)
  4. Hardware configuration and network setup

    Solved! The dropped packets were due to Netgear LLDP and ISDP frames; for some reason they reached the VMs and were dropped there. I have only one last problem to solve: on the Ceph cluster network (where the MTU is set to 9000, also on the switch) performance is very bad, with iperf at 4.8 Gb/s; if I do the same test on... (an iperf3 test sketch follows the results list)
  5. Hardware configuration and network setup

    The drop in the VM seems to happen at this level: 1 drops at __netif_receive_skb_core+68f (0xffffffff8204f67f). I have no idea what that means... (a dropwatch sketch for tracing this follows the results list)
  6. Hardware configuration and network setup

    I should also add that I saw a large number of dropped RX packets on each bond and its underlying slaves (enoX interfaces). The strange thing is that the drops occurred in exactly the same number on the 2 bonds and on the 2 underlying interfaces (completely different VLANs). Also I have a drop in an RX... (a counter-inspection sketch follows the results list)
  7. Hardware configuration and network setup

    Hi again, I made this setup in my /etc/network/interfaces. I have separated the Ceph public net from the Ceph cluster net because I would like to have a VM monitoring Ceph with Prometheus (on the public net). This is the reason why I created vmbr1060: to have a VM attached to that network... (a bridge-over-VLAN sketch follows the results list)
  8. Hardware configuration and network setup

    What about MTU? I read the docs but I can't find any hint. I've been advised to set the MTU to 9000 on the Ceph network for better performance, but I have some questions. If I set the MTU (with the mtu directive in /etc/network/interfaces) to 9000 on the Ceph bond, then I also have to set it on the underlying... (an MTU 9000 config sketch follows the results list)
  9. Hardware configuration and network setup

    I will reserve 2 NICs for that (one per switch, with a dual ring for Corosync). At this point I have no idea which network to use for migration. I have 6 NICs (2x 10 Gb and 4x 1 Gb): I will use the 2x 10 Gb (one per switch) for Ceph, 2x 1 Gb (also one per switch) for Corosync, and 2x 1 Gb (still one per...
  10. Hardware configuration and network setup

    Ok, thanks. One last question: by adding one dual-port 1G Ethernet adapter per node we could have a dedicated network for Corosync, in order to avoid all possible issues connected with cluster sync. In that case, is it safe to keep the cluster management network (access to the Proxmox interface) as a VLAN...
  11. Hardware configuration and network setup

    We could. It's about complexity: we have two switches that are not natively stackable, so we would have to use MLAG, and we are not confident with that. But at this point we will try LACP with MLAG. I thought it was better for managing multiple VLANs on the same bridge (some VMs... (an LACP bond plus VLAN-aware bridge sketch follows the results list)
  12. Hardware configuration and network setup

    Hi, we are planning a migration to a 3-node Proxmox cluster with Ceph. We have 3 identical servers, each with the following specs: Dell PowerEdge R640, Intel Xeon Silver 4110 2.1G 8C/16T, 64 GB RAM, 1x 240 GB Intel S4600 SSD for the Proxmox OS, 2x 480 GB Samsung SM863 SSD -> OSD (SSD pool), 3x...
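
Configuration and diagnostic sketches

For result 1: to confirm whether VRRP traffic is actually reaching the VM's bridge, a capture filtered on IP protocol 112 and the VRRP multicast group is usually enough. A minimal sketch; the interface name vmbr0 is an assumption, substitute the bridge or tap device in question.

    # Capture only VRRP: IP protocol 112, multicast group 224.0.0.18
    # (vmbr0 is an assumed interface name)
    tcpdump -nni vmbr0 'ip proto 112 or host 224.0.0.18'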
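
For result 3: a jumbo-frame path can be checked end to end with ping by forbidding fragmentation and using a payload that fills a 9000-byte MTU. A minimal sketch; the peer address 10.10.50.12 is an assumption.

    # 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids fragmentation
    ping -M do -s 8972 -c 4 10.10.50.12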
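
For result 4: when throughput on the MTU 9000 Ceph network looks low, an iperf3 run with a few parallel streams between two nodes gives a quick baseline. A minimal sketch; the address and stream count are assumptions.

    # on the receiving node
    iperf3 -s
    # on the sending node: 4 parallel streams for 30 seconds
    iperf3 -c 10.10.50.12 -P 4 -t 30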
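
For result 5: the quoted line looks like dropwatch output; a typical session, assuming the dropwatch package is installed, resolves drop locations to kernel symbols such as __netif_receive_skb_core.

    apt install dropwatch
    dropwatch -l kas                    # resolve drop sites to kernel symbols
    # at the dropwatch> prompt: type "start", reproduce the traffic, then "stop"
    # alternative: perf record -a -e skb:kfree_skb sleep 10 && perf script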
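
For result 6: per-interface RX drop counters can be compared directly on the bonds and their slaves; identical numbers on unrelated interfaces often point to traffic flooded to all ports, which fits the LLDP/ISDP finding in result 4. A minimal sketch; bond0 and eno1 are assumed names.

    # kernel-level counters (see the RX "dropped" column)
    ip -s link show bond0
    ip -s link show eno1
    # driver/NIC counters; statistic names vary by driver
    ethtool -S eno1 | grep -i drop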
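
For result 7: attaching a VM to the Ceph public net typically means a bridge on top of a tagged VLAN of the bond. A minimal /etc/network/interfaces sketch in ifupdown2 notation (older ifupdown uses the underscore forms such as bridge_ports); the bond name, VLAN 1060, and addressing are assumptions, only vmbr1060 comes from the post.

    auto bond1.1060
    iface bond1.1060 inet manual

    auto vmbr1060
    iface vmbr1060 inet static
            address 10.10.60.11/24
            bridge-ports bond1.1060
            bridge-stp off
            bridge-fd 0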
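
For result 8: one common, explicit approach is to put the mtu directive on the slave NICs as well as on the bond, and to make sure the switch ports accept jumbo frames. A minimal sketch in the same notation as above; interface names and the address are assumptions.

    auto eno1
    iface eno1 inet manual
            mtu 9000

    auto eno2
    iface eno2 inet manual
            mtu 9000

    auto bond1
    iface bond1 inet static
            address 10.10.50.11/24
            bond-slaves eno1 eno2
            bond-mode 802.3ad
            bond-miimon 100
            mtu 9000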
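
For result 11: an LACP (802.3ad) bond toward the two MLAG switches combined with a VLAN-aware bridge is a common way to carry multiple VLANs on one bridge in Proxmox; each VM then gets its VLAN tag on its virtual NIC. A minimal sketch; interface names, addresses, and the VLAN range are assumptions.

    auto bond0
    iface bond0 inet manual
            bond-slaves eno3 eno4
            bond-mode 802.3ad
            bond-xmit-hash-policy layer2+3
            bond-miimon 100

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.11/24
            gateway 192.168.1.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094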
