Search results

  1.

    corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    Just to make that clear again: the network link is not saturated. Monitoring shows an average of around 400 MBit/s on a GBit interface, which means it is quite far from a bandwidth problem. So the real question here is: why are packets lost at all? And still: how does corosync really "find...
  2.

    corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    The kernel is 6.17.13-2-pve, exactly the one pve-enterprise installs. The module is indeed available. Thanks
  3.

    corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    I wanted to try the problem situation with the bbr congestion-control variant. But I found that the kernel delivered with Proxmox does not supply this congestion algorithm. Why is this?
  4.

    corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    That is not the complete truth. Look at this: --- 192.168.192.250 ping statistics --- 14000 packets transmitted, 13862 received, 0.985714% packet loss, time 14133836ms rtt min/avg/max/mdev = 0.103/1.065/3.615/1.127 ms This is quite a long-running ping during NFS load. If there was really heavy...
  5.

    corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    No, the fiber link is dedicated; the copper is also used for other purposes. I checked that in detail and had to find out that the latest kernel networking is not really as good as one might think - after all those years. If there is NFS traffic (ok, heavy NFS traffic) going on on the link, ICMPs...
  6.

    corosync show link flapping (down/up) about every 3-4 minutes, but switch shows no problem

    Hello all, I recently experienced a problem with corosync showing link flapping, but it seems to me that these reports are really fake. Neither the corresponding switch shows a link problem, nor do the kernels of the boxes (3-box cluster). I use a 10G fiber main link and 1G copper backup links. Flapping...
  7.

    Mixed PVE8/PVE9 cluster - how stable is it?

    We could not use pve8 because of the old ZFS version included there. We incorporated a ZFS volume from another setup which only worked with the ZFS version of pve9. Mounting was not possible with pve8.
  8.

    Mixed PVE8/PVE9 cluster - how stable is it?

    I would love to see some official statement on this question, because absolutely everybody with a cluster runs into this while upgrading to pve9. Me, I'd like to add a new pve9 node to an existing pve8 cluster ... Migration of a guest from pve8 to a pve9 node is also an important point.
  9.

    What is correct way to install ... intel-microcode ?

    There is no good way for me to reproduce that situation, as I already converted the setup to 9 and turned the list files into sources as recommended. Since this was the primary intent, I only tried to follow the messages from pve8to9 as closely as possible, which brought me here.
  10.

    What is correct way to install ... intel-microcode ?

    The thing is: on my side, adding it to every entry with debian.org did not work correctly. But when I added it only to the first occurrence (just like the example above), it did work.
  11.

    What is correct way to install ... intel-microcode ?

    Thank you for the resurrection. Why can't the wiki state where exactly "non-free-firmware" has to be added (for non-Debianists)?
  12.

    Consolidating a former ZFS Fileserver with Promox

    Hello all, I will shortly try to remove an old NFS server with a ZFS filebase and replace it with a new Proxmox system (re-using the ZFS HDDs/filebase). I am currently thinking of two options: 1) Put ZFS completely on the host and make the NFS server a VM with a virtual disk being the ZFS pool...
  13.

    [TUTORIAL] virtiofsd in PVE 8.0.x

    I can tell you, using the script made by Drallas above in this thread got me a working setup, at least as regards the virtiofs part. There is a "my guide" link above; follow that. It installs a hookscript, and you probably have to reboot the VM twice, but then it should work.
  14.

    USB3 passthrough to vm with 10G

    Thank you for thinking about the problem. Unfortunately 9p is no solution either, because it has the same problem with export via NFS as the thought-to-be replacement virtiofs. In the long run I will probably end up reformatting the HDs and using ZFS instead. But at this point I simply have...
  15.

    USB3 passthrough to vm with 10G

    Hm, unfortunately USB passthrough was not my first choice for solving the setup in question; it is only a currently working one. In fact I would have liked to pass the HDs through as virtiofs. But since this cannot easily be exported via NFS, and nobody could tell me so far how to make a...
  16.

    USB3 passthrough to vm with 10G

    No, I have not so far. But that would not be very useful anyway, as there are only HDs connected, and they lack the necessary bandwidth for the test. But I do think that passed-through USB ports should come out in the VM exactly as they were on the host, else it is no real passthrough.
  17.

    USB3 passthrough to vm with 10G

    Hello all, I am trying to pass through some USB ports from the host to a Linux VM and found that they all come out at 5000M speed and not 10G, although they show 10G when used on the host. Is there a way to change this behaviour? Has anybody seen 10G USB devices inside a VM? Thank you for comments.
  18.

    virtiofs with nfs ?

    Hello, thank you for posting the content from Red Hat, only I doubt the given cause. Clearly NFS v2/3 use persistent file handles. But v4 should be able to provide volatile file handles. The problem I have is that I cannot find any docs for Linux on how to force NFSv4 to always give volatile...
  19.

    virtiofs with nfs ?

    This seems not to be true. At least this is what https://access.redhat.com/solutions/7000411 seems to say, as it marks a "solution verified". I cannot tell you how, though, because the Red Hat content is closed...
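
Result 4 quotes a ping summary with 14000 packets transmitted and 13862 received. As a quick sanity check, that really does work out to the ~0.99% loss shown (plain arithmetic, no assumptions beyond the numbers in the quoted output):

```python
# Verify the packet-loss percentage from the quoted ping statistics
transmitted = 14000
received = 13862
loss_pct = (transmitted - received) / transmitted * 100
print(f"{loss_pct:.6f}% packet loss")  # → 0.985714% packet loss
```

So the reported 0.985714% is internally consistent; the open question in the thread remains why those 138 packets were lost at all.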
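
Result 3 asks why bbr is missing from the shipped kernel. A minimal way to check what the running kernel actually offers is the standard sysctl interface; whether `tcp_bbr` exists as a loadable module depends entirely on the kernel build, so the `modprobe` step below is an assumption, not a guarantee:

```shell
# List the congestion-control algorithms currently available to TCP
sysctl net.ipv4.tcp_available_congestion_control

# If (and only if) the kernel build ships tcp_bbr as a module, load it
modprobe tcp_bbr
sysctl net.ipv4.tcp_available_congestion_control  # bbr should now appear

# Switch the default at runtime (persist via /etc/sysctl.d/ if desired)
sysctl -w net.ipv4.tcp_congestion_control=bbr
```

If `modprobe tcp_bbr` fails with "module not found", the algorithm was not built for that kernel, which matches what the poster observed.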