Recent content by sherminator

  1. Slow memory leak in 6.8.12-13-pve

    Side note from an unaffected setup: our 3-node cluster (PVE/Ceph) is running PVE 8.4.x. Last weekend (the gap in the chart below) we updated from kernel 6.8.12-11 to 6.8.12-13. Our Ceph network is built on Broadcom P425G NICs. Maybe that helps a little.
  2. Proxmox network problems with Windows guests

    Hi Markus, your description vaguely reminds me of the problems we had at the start of our current PVE hardware generation. On Windows VMs that launched applications located on another VM's file share (that's just how it is with our...
  3. UPS Help! Power cut already 2 times

    I can recommend CyberPower. We run them in a couple of server and networking racks - no issues so far.
  4. Accessing internet from Proxmox

    To understand your setup better: the connection from your PC to the Proxmox WebGUI is working? The OPNsense VM has two virtual NICs? Its "WAN" is connected to your WAN bridge, and its "LAN" is connected to... which bridge?
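
    For reference, a minimal sketch of the two-bridge layout this question assumes; bridge names, ports, and addresses are placeholders, not taken from the thread:

      # /etc/network/interfaces on the PVE host (hypothetical example)
      auto vmbr0
      iface vmbr0 inet manual
          bridge-ports eno1      # uplink towards the modem/ISP -- the "WAN" bridge
          bridge-stp off
          bridge-fd 0

      auto vmbr1
      iface vmbr1 inet static
          address 192.168.1.2/24 # host address on the LAN side
          bridge-ports none      # internal-only bridge for the OPNsense "LAN" NIC
          bridge-stp off
          bridge-fd 0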
  5. Network card is not detected in the GUI

    Hi, ip addr should show you all NICs that the operating system has detected.
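
    A compact way to get that overview (standard iproute2 commands, output omitted):

      ip addr            # all NICs with their addresses and states
      ip -br link show   # one brief line per NIC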
  6. [TUTORIAL] Broadcom NICs down after PVE 8.2 (Kernel 6.8)

    I would like to share today's observations on this: I just updated some P425G NICs, and after a server reboot everything looks as expected:
      Active Package version : 230.1.116.0
      Package version on NVM : 230.1.116.0
      Firmware version       : 230.0.156.0
    But on a P225G a reboot...
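
    As a hedged aside: the running firmware of a NIC can also be read without vendor tools via ethtool; the interface name below is an assumption:

      ethtool -i enp65s0f0   # the "firmware-version" field shows the active firmware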
  7. Many TCP Retransmissions and TCP Dup ACKs: Wrong link aggregation configuration?

    Thanks for your reply! Hm, (R)STP is configured on all switches, and according to our bandwidth monitoring there is no loop between the switches. What can I do to debug this? But I feel this is not a Proxmox issue anymore... o_O
  8. Many TCP Retransmissions and TCP Dup ACKs: Wrong link aggregation configuration?

    Hi there, on our three-node Proxmox/Ceph cluster we discovered many of the above TCP errors. We tracked it down to this: only outgoing traffic from a VM to any destination that is not on the same Proxmox node is affected. Each node is connected via 2x 10G to a switch. The related network...
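
    For comparison, a common LACP bond definition on a PVE node; interface names, addresses, and the hash policy are assumptions, since the snippet does not show the actual config:

      auto bond0
      iface bond0 inet manual
          bond-slaves enp1s0f0 enp1s0f1    # the 2x 10G uplinks
          bond-mode 802.3ad                # LACP; must match the switch-side config
          bond-miimon 100
          bond-xmit-hash-policy layer3+4   # spread flows across both links

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.10/24            # placeholder address
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0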
  9. Proxmox 8 crashing with error - some VMs seem to carry on running

    Did I get this right? Proxmox VE is running, VMs are running, but you get these errors and can't connect via SSH ("putty") anymore? My first guess is a faulty system disk. Could you describe your disk setup?
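
    A minimal first check for the "faulty system disk" guess, assuming smartmontools is installed and /dev/sda is the system disk:

      smartctl -a /dev/sda                 # SMART health, reallocated/pending sectors
      journalctl -k | grep -i 'i/o error'  # kernel-side I/O errors, if any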
  10. [TUTORIAL] Broadcom NICs down after PVE 8.2 (Kernel 6.8)

    I would like to share another issue which, in our case, was solved by a firmware update of the Broadcom NICs. We have two servers with P425G NICs and SFP28 25G transceivers, the "300m" edition. The OM4 path between these two servers is longer than 300 meters, so the expected behaviour is that they...
  11. [SOLVED] Disabling TLS 1.0 and 1.1

    Proxmox support, excellent as always, has solved the problem: we don't have to use smtpd_tls_mandatory_protocols but smtpd_tls_protocols; then it works as desired.
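
    Put together, the working change from these two posts looks like this:

      # /etc/pmg/templates/main.cf.in
      smtpd_tls_protocols = >=TLSv1.2

      # apply the template and restart Postfix
      pmgconfig sync --restart 1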
  12. [SOLVED] Disabling TLS 1.0 and 1.1

    Hi there, we are trying to disable TLS 1.0 and 1.1 for our PMG/Postfix. To that end we added smtpd_tls_mandatory_protocols = >=TLSv1.2 to our /etc/pmg/templates/main.cf.in and committed the change via pmgconfig sync --restart 1. Then we tested it from another machine with openssl s_client -connect...
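
    A hedged example of such a test from another machine; the hostname is a placeholder:

      # should be rejected once TLS 1.0 is disabled
      openssl s_client -connect mx.example.org:25 -starttls smtp -tls1
      # should still negotiate
      openssl s_client -connect mx.example.org:25 -starttls smtp -tls1_2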
  13. Full mesh as failover cluster network

    Sorry, I don't get it yet. ens18 is a physical connection to node 2; node 3 can't communicate with node 1 via this connection. Am I overthinking this? Should I just set the first mesh interface as the failover link for the cluster? Edit: ah, in the end only the IP address matters for the cluster...
  14. Full mesh as failover cluster network

    Thanks for your quick reply! I haven't created the cluster yet. Full mesh (at least in my case) means that there is one link to "node 2" and one link to "node 3", both with the same IP address, like this:
      # Connected to Node2 (.51)
      auto ens18
      iface ens18 inet static
          address 10.15.15.50...
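
    For context, a full pair of mesh stanzas in the same style, following the simple routed setup from the PVE full-mesh wiki; the second stanza and the route lines are extrapolated, not shown in the post:

      # Connected to Node2 (.51)
      auto ens18
      iface ens18 inet static
          address 10.15.15.50/24
          up   ip route add 10.15.15.51/32 dev ens18
          down ip route del 10.15.15.51/32

      # Connected to Node3 (.52) -- assumed counterpart
      auto ens19
      iface ens19 inet static
          address 10.15.15.50/24
          up   ip route add 10.15.15.52/32 dev ens19
          down ip route del 10.15.15.52/32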
  15. Full mesh as failover cluster network

    Hi there, I'm building a new 3-node PVE cluster. I already have a full mesh prepared for live migrations. Is it possible to use this full mesh as a failover network for the cluster itself? In the WebUI, at "Create Cluster", it doesn't seem to be possible to add both mesh links. Thanks and greets!
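
    If the WebUI only offers one link, the CLI accepts a second one at creation time; a sketch with assumed addresses (management network as link0, the mesh IP as link1):

      pvecm create mycluster --link0 192.168.1.50 --link1 10.15.15.50

    Additional nodes would then join with matching --link0/--link1 addresses via pvecm add.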