Recent content by skraw

  1. corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    No, I don't have fine-grained monitoring, and really, that is not exactly the point. I am quite sure that all your working nodes sit on an otherwise empty network, just as the Proxmox docs for corosync recommend. The thing is this: if I have even a 5-min average throughput of about 400 MBit, a GBit...
  2. corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    Just to make that clear again: the network link is not saturated. Monitoring shows an average of around 400 MBit/s on a GBit interface, which means it is quite far away from a bandwidth problem. So the real question here is: why are packets lost at all? And still: how does corosync really "find...
  3. corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    The kernel is 6.17.13-2-pve, exactly the one pve-enterprise installs. The module is indeed available. Thanks
  4. corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    I wanted to test the problem situation with the BBR congestion-control algorithm, but I found that the kernel delivered with Proxmox does not ship it. Why is this?
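A quick way to check this on any node (a minimal sketch; whether the module ships depends on the kernel build, and loading it needs root):

```shell
# List the congestion-control algorithms the running kernel offers
cat /proc/sys/net/ipv4/tcp_available_congestion_control

# Try to load the BBR module; report either way instead of failing
if modprobe tcp_bbr 2>/dev/null; then
    echo "tcp_bbr loaded"
else
    echo "tcp_bbr not shipped/loadable in this kernel build"
fi
```

If `bbr` then appears in the available list, it can be selected with `sysctl net.ipv4.tcp_congestion_control=bbr`.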
  5. corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    That is not the complete truth. Look at this:
    --- 192.168.192.250 ping statistics ---
    14000 packets transmitted, 13862 received, 0.985714% packet loss, time 14133836ms
    rtt min/avg/max/mdev = 0.103/1.065/3.615/1.127 ms
    This is quite a long-running ping during NFS load. If there was really heavy...
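The loss figure in that summary is internally consistent: 14000 transmitted minus 13862 received leaves 138 lost packets, and 138 / 14000 is just under 1 %. A one-liner to check the arithmetic:

```shell
# Packet loss percentage: (transmitted - received) / transmitted * 100
awk 'BEGIN { tx = 14000; rx = 13862; printf "%.6f%%\n", (tx - rx) / tx * 100 }'
# → 0.985714%
```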
  6. corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    No, the fiber link is dedicated; the copper link is also used for other purposes. I checked that in detail and found that the latest kernel networking is not really as good as one might think, after all those years. If there is NFS traffic (ok, heavy NFS traffic) going on on the link, ICMPs...
  7. corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    Hello all, I recently experienced a problem with corosync reporting link flapping, but it seems to me that these reports are false alarms. Neither the corresponding switch nor the kernels of the boxes (3-node cluster) show a link problem. I use 10G fiber main links and 1G copper backup links. Flapping...
  8. Mixed PVE8/PVE9 cluster - how stable is it?

    We could not use pve8 because of the old ZFS version it includes. We incorporated a ZFS volume from another setup that only worked with the ZFS version in pve9; mounting it was not possible with pve8.
  9. Mixed PVE8/PVE9 cluster - how stable is it?

    I would love to see an official statement on this question, because absolutely everybody with a cluster runs into it while upgrading to pve9. I'd like to add a new pve9 node to an existing pve8 cluster ... Migrating a guest from a pve8 node to a pve9 node is also an important point.
  10. What is correct way to install ... intel-microcode ?

    There is no good way for me to reproduce that situation, as I have already converted the setup to 9 and turned the list files into sources as recommended. Since this was the primary intent, I only tried to follow the messages from pve8to9 as closely as possible, which brought me here.
  11. What is correct way to install ... intel-microcode ?

    The thing is: on my side, adding it to every entry with debian.org did not work correctly. But when I added it only to the first occurrence (just like the example above), it did work.
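For illustration, a sketch of what that looks like, assuming a classic one-line /etc/apt/sources.list (mirror and suite names here are the stock Debian ones; adjust them to your install). The component goes only on the first deb.debian.org entry:

```
# /etc/apt/sources.list -- non-free-firmware appended to the first debian.org entry only
deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
deb http://deb.debian.org/debian bookworm-updates main contrib
deb http://security.debian.org/debian-security bookworm-security main contrib
```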
  12. What is correct way to install ... intel-microcode ?

    Thank you for resurrecting this thread. Why can't the wiki state where exactly "non-free-firmware" has to be added (for non-Debianists)?
  13. Consolidating a former ZFS Fileserver with Promox

    Hello all, I will shortly try to retire an old NFS server with a ZFS filebase and replace it with a new Proxmox system (re-using the ZFS HDDs/filebase). I am currently thinking of two options: 1) Put ZFS completely on the host and make the NFS server a VM, with a virtual disk being the ZFS pool...
  14. [TUTORIAL] virtiofsd in PVE 8.0.x

    I can tell you, using the script made by Drallas earlier in this thread got me a working setup, at least with regard to the virtiofs part. There is a "my guide" link above; follow that. It installs a hookscript, and you will probably have to reboot the VM twice, but then it should work.
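For orientation, a rough sketch of the plumbing such a hookscript sets up (socket path, tag, and shared directory below are made-up placeholders, and the exact virtiofsd flags differ between versions; treat this as a sketch, not the script's actual contents):

```
# Host: virtiofsd exports a directory over a vhost-user socket
/usr/libexec/virtiofsd --socket-path=/run/virtiofsd-vm100.sock --shared-dir /srv/share

# Extra QEMU args for the VM (vhost-user-fs needs a shared memory backend)
-chardev socket,id=vfs0,path=/run/virtiofsd-vm100.sock
-device vhost-user-fs-pci,chardev=vfs0,tag=share0
-object memory-backend-memfd,id=mem,size=4G,share=on -numa node,memdev=mem

# Guest: mount the export by its tag
mount -t virtiofs share0 /mnt/share
```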