Update:
I tried blocking VRRP packets with the firewall from the Proxmox GUI. The packet drops are still there, even with no VRRP packets reaching the VM network, so it's not pfSense's fault. I ran another tcpdump but there are no relevant packets; traffic looks normal (SSH + PostgreSQL client).
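For reference, the kind of capture I mean is roughly this (vmbr0 and the ports are only examples, adjust to your setup):

    tcpdump -nei vmbr0 not port 22 and not port 5432

The -e flag prints the link-level headers, which makes stray L2 frames (LLDP, discovery protocols and so on) easy to spot among the normal traffic.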
Any ideas on...
Hi,
I'm experiencing a strange problem. I'm not sure whether it is related to Proxmox in some way, but it started happening last night after the 5.4-13 upgrade on our nodes.
As I said, last night we updated our nodes to the latest Proxmox 5.x version (5.4-13). After the node updates, some monitoring systems...
Unfortunately it is not working: I'm not able to reach VMs when using the VLAN-aware flag, but with a bridge attached to the tagged bond it works very well. Shouldn't both approaches behave the same way?
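To be clear, these are the two variants I'm comparing, roughly sketched (interface names and the VLAN ID are just examples; on PVE 5 the option names use the underscore form):

    # VLAN-aware bridge: the tag is set per VM on its virtual NIC
    auto vmbr0
    iface vmbr0 inet manual
            bridge_ports bond0
            bridge_stp off
            bridge_fd 0
            bridge_vlan_aware yes

    # classic alternative: one bridge per VLAN on an already-tagged bond
    auto vmbr100
    iface vmbr100 inet manual
            bridge_ports bond0.100
            bridge_stp off
            bridge_fd 0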
What about the MTU? Do you have any idea why we are getting this bad result on the 9000 MTU network? I ran a ping test with
ping...
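For reference, one way to check the jumbo-frame path end to end is a ping with the DF bit set, along these lines (the address is only a placeholder for a Ceph peer; 8972 = 9000 minus 28 bytes of IP/ICMP headers):

    ping -M do -s 8972 10.10.60.2

If that fails while a normal ping works, some hop in the path is still at MTU 1500.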
Solved! The dropped packets were due to Netgear LLDP and ISDP frames; for some reason they reached the VMs and were dropped there.
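In case anyone else runs into this: LLDP uses EtherType 0x88cc, and the ISDP/CDP-style frames go to the 01:00:0c:cc:cc:cc multicast MAC (please verify the values for your own gear). If you want to keep them away from the guests, ebtables rules roughly like these on the host should do it:

    ebtables -A FORWARD -p 0x88cc -j DROP
    ebtables -A FORWARD -d 01:00:0c:cc:cc:cc -j DROP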
I have only one last problem to solve: on the Ceph cluster network (where the MTU is set to 9000, also on the switch) performance is very bad, with iperf showing 4.8 Gb/s; if I do the same test on...
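To rule out a single-flow limit versus an MTU problem, I'll retest with parallel streams, roughly like this (the address is a placeholder for the other Ceph node):

    iperf -s                         # on one node
    iperf -c 10.10.60.3 -P 4 -t 30   # from another node, 4 parallel streams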
I should also add that I saw a large number of dropped RX packets on each bond and its underlying slaves (enoX interfaces). The strange thing is that the drops occurred in exactly the same number on the 2 bonds and on the 2 underlying interfaces (completely different VLANs).
I also have a drop in an RX...
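For reference, the counters I'm looking at come from something like this (interface names are examples):

    ip -s link show bond0            # RX dropped column for the bond
    ip -s link show eno1             # and for each slave
    ethtool -S eno1 | grep -i drop   # driver-level drop counters, if exposed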
Hi again,
I made this setup in my /etc/network/interfaces. I have separated the Ceph public network from the Ceph cluster network because I would like to have a VM monitoring Ceph with Prometheus (on the public network). This is the reason why I created vmbr1060: to have a VM attached to that network...
What about MTU? I read the docs but I can't find any hint.
I've been advised to set the MTU to 9000 on the Ceph network for better performance, but I have some questions. If I set the MTU (with the mtu directive in /etc/network/interfaces) to 9000 on the Ceph bond, then do I also have to set it on the underlying...
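What I have in mind, roughly (interface names, VLAN and addresses are placeholders, and on PVE 5 the underscore option names apply), is setting the MTU on the slave NICs, the bond, and the bridge on top of it:

    iface eno3 inet manual
            mtu 9000

    iface eno4 inet manual
            mtu 9000

    auto bond1
    iface bond1 inet manual
            slaves eno3 eno4
            bond_mode 802.3ad
            bond_miimon 100
            mtu 9000

    auto vmbr1060
    iface vmbr1060 inet static
            address 10.10.60.1
            netmask 255.255.255.0
            bridge_ports bond1.1060
            bridge_stp off
            bridge_fd 0
            mtu 9000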
I will reserve 2 NICs for that (one per switch, with a dual ring for Corosync).
At this point I have no idea which network to use for migration. I have 6 NICs (2x 10 Gb and 4x 1 Gb); I will use the 2x 10 Gb (one per switch) for Ceph, 2x 1 Gb (also one per switch) for Corosync and 2x 1 Gb (still one per...
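For the migration traffic itself, my understanding is that it can be pinned to a specific subnet in /etc/pve/datacenter.cfg, along these lines (the subnet is a placeholder):

    migration: secure,network=10.10.50.0/24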
Ok thanks.
One last question.
By adding one dual-port 1 Gb Ethernet adapter per node we could have a dedicated network for Corosync, in order to avoid all possible issues with cluster sync.
In that case, is it safe to keep the cluster management network (access to the Proxmox interface) as a VLAN...
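To make the question concrete, the management bridge I have in mind would sit on a tagged VLAN of the existing bond, something like this (VLAN ID and addresses are placeholders):

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.10.11
            netmask 255.255.255.0
            gateway 192.168.10.1
            bridge_ports bond0.10
            bridge_stp off
            bridge_fd 0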
We could. It's a matter of complexity: we have two switches that are not natively stackable, so we would have to use MLAG, and we are not confident with that. But at this point we will try LACP with MLAG.
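The LACP side would just be a standard 802.3ad bond in /etc/network/interfaces, roughly like this (NIC names are examples; the MLAG part lives entirely on the switches):

    auto bond0
    iface bond0 inet manual
            slaves eno1 eno2
            bond_mode 802.3ad
            bond_miimon 100
            bond_xmit_hash_policy layer2+3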
I thought it was better for managing multiple VLANs on the same bridge (some VMs...
Hi,
we are planning a migration to a 3-node Proxmox cluster with Ceph.
We have 3 identical servers, each with the following specs:
Dell PowerEdge R640
Intel Xeon Silver 4110 2.1G, 8C/16T
RAM 64 GB
1x SSD 240 GB Intel S4600 for Proxmox OS
2x SSD 480 GB Samsung SM863 -> OSD (SSD Pool)
3x...