Search results

  1. Non-optimal routing speeds with Proxmox 7

    Hello, we've been virtualizing MikroTik CHR routers on Proxmox for several years now. In my opinion the performance limitation is the cost of processing packets twice, and at kernel level, in the host and guest network stacks. As you noticed, there is no problem achieving 10Gbps with a single TCP...
  2. Offloading emulation with DPUs

    Hi, we are starting BlueField-2 DPU testing with Proxmox. The goal is to move the VXLAN/EVPN control plane into the DPU and to accelerate packet processing with its hardware capabilities. First I'll try the native HBN service, based on the Cumulus network OS, running in a container. Then I will test a custom Debian system...
  3. Slow network connection from VM1 to host2. Fast connection from VM1 on host1 to VM2 on host2. Fast connection from host2 to VM1.

    Recently, I have discovered a TSO issue on PVE7.0: some tap interfaces do not have the TSO flag set. To check whether TCP segmentation offload is correctly set, run ethtool on the tap interface: ethtool -k tap***i* | grep offload
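    As a rough illustration (the tap interface name below is a hypothetical placeholder; on PVE it normally follows the pattern tap<VMID>i<index>), the offload state can be read and, if needed, TSO re-enabled like this:

      # query the current offload settings of the tap interface (lowercase -k reads, uppercase -K writes)
      ethtool -k tap100i0 | grep segmentation-offload
      # re-enable TCP segmentation offload if it is reported as "off"
      ethtool -K tap100i0 tso on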
  4. ZFS with Raid Controller

    Keroex, the most secure and stable way requires that ZFS accesses the disks directly. On an HPE Gen10 with a P840 you can set some disks to passthrough mode and assign the others to hardware RAID; for example, you can install Proxmox on hardware RAID (a mirror of 2 disks is fine) so the server survives if a disk dies...
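    As a minimal sketch (the pool name and by-id paths are hypothetical), a mirrored pool on the passthrough disks would then be created with the disks addressed directly by their stable IDs:

      # create a ZFS mirror directly on the disks exposed in passthrough mode,
      # using /dev/disk/by-id paths so the pool survives device renumbering
      zpool create -o ashift=12 tank mirror \
          /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2
      zpool status tank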
  5. Network packet loss in high traffic VMs

    Hi, I am doing some QEMU 7 tests; rx_queue_size=1024,tx_queue_size=1024 are present in the VM command line. In the guest OS only the RX queue is increased, TX stays at 256. Did you notice that too? Is this normal behavior?
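    For reference, the values can be compared on both sides roughly like this (the VMID and guest interface name are placeholders):

      # on the Proxmox host: show the queue sizes passed on the QEMU command line
      qm showcmd 100 | grep -o '[rt]x_queue_size=[0-9]*'
      # inside the guest: show the maximum and current RX/TX ring sizes of the virtio NIC
      ethtool -g eth0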
  6. Network packet loss in high traffic VMs

    @mika, it is possible to change the ring buffer on a virtio-net vNIC (I have not tested it); this should logically reduce packet loss during intensive UDP traffic. Don't forget to check the other conditions: path L2 MTU, VM tx buffer (txqlen), VM scaling governor, hypervisor power profile (max perf is...
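    A hedged sketch of those checks (interface name and values are placeholders; the first two commands run inside the guest):

      # try to raise the virtio-net ring buffers, if the device exposes larger maximums
      ethtool -G eth0 rx 1024 tx 1024
      # increase the transmit queue length (txqlen)
      ip link set dev eth0 txqueuelen 10000
      # on the hypervisor: check the CPU scaling governor; "performance" avoids frequency-related jitter
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor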
  7. How to reinstall a node in a cluster (two nodes with quorum device)

    Hi, I have checked the delnode procedure and here are the results:

      root@FLEXCLIPVE03:~# pvecm delnode FLEXITXPVE03
      Could not kill node (error = CS_ERR_NOT_EXIST)
      Killing node 2
      command 'corosync-cfgtool -k 2' failed: exit code 1

    It seems that the FLEXITXPVE03 node has already been removed in another way...
  8. How to reinstall a node in a cluster (two nodes with quorum device)

    Many thanks, I will test and check the procedure next week.
  9. How to reinstall a node in a cluster (two nodes with quorum device)

    OK, so I set "expected 1" after having powered off the node?
  10. How to reinstall a node in a cluster (two nodes with quorum device)

    Hi @aaron, I've tested the procedure and there is an inconsistency. In that case, first remove the qdevice: pvecm qdevice remove. Then check pvecm status, confirming that at most 2 votes are expected. Move all guests from the node that is to be reinstalled. Remove that node following the...
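    Put together, the corrected order sketched here (using the node name from this thread) would be roughly:

      # 1. remove the external quorum device first
      pvecm qdevice remove
      # 2. confirm that only the two cluster nodes are counted (Expected votes: 2)
      pvecm status
      # 3. after migrating all guests away and powering the node off, remove it
      pvecm delnode FLEXITXPVE03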
  11. Bridge does not inherit MTU from ports in PVE 7 using ifupdown2

    @iniciaw, what you describe is the expected behavior, and I have not noticed any issue with bond interfaces. For bridge interfaces, PVE6 and PVE7 behave differently; let me clarify this. With the same /etc/network/interfaces file, only the physical NICs and dummy NICs have mtu 8950 specified. On PVE6...
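    One way to make this explicit on PVE7, sketched here with hypothetical interface names (eno1, vmbr0), is to set the MTU on the bridge itself instead of relying on inheritance from its ports:

      # /etc/network/interfaces excerpt
      auto eno1
      iface eno1 inet manual
          mtu 8950

      auto vmbr0
      iface vmbr0 inet manual
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0
          mtu 8950

      # verify the MTU that was actually applied
      ip link show vmbr0 | grep -o 'mtu [0-9]*'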
  12. Bridge does not inherit MTU from ports in PVE 7 using ifupdown2

    Hi, I've noticed the same behavior on PVE7.0 nodes; previous versions PVE6 and PVE5, running with ifupdown, do not have the problem. @spirit, what conclusions did you reach?
  13. [Tutorial] Run Open vSwitch (OVS) + DPDK on PVE 7.0

    Hello, very interesting post. I have tested PVE7.0 VXLAN-EVPN with SR-IOV (not flexible, and the number of PCI devices is limited) and with the mlx5 vDPA solution (buggy when using many vhost-vdpa devices). I never tested the Open vSwitch + DPDK solution because the VXLAN-EVPN implementation does not accept FRR as EVPN...
  14. qmeventd sends SIGKILL before the VM shuts down its NICs

    With a sleep 30 you should notice after 5 seconds that qmeventd sends a SIGKILL to the QEMU process and then invokes qm cleanup, but that one doesn't call tap_unplug. (To display the SIGKILL in the logs you must set /etc/pve/.debug to 1.) After a qm shutdown, if qmeventd fires SIGKILL, we expect that cleanup...
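    To observe this on a test node, a rough sequence (the VMID is a placeholder) could be:

      # make qmeventd log the SIGKILL, as noted above
      echo 1 > /etc/pve/.debug
      # shut the VM down and watch what qmeventd does in the journal
      qm shutdown 100 &
      journalctl -fu qmeventd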
  15. qmeventd sends SIGKILL before the VM shuts down its NICs

      # the original line
      next if $opt !~ m/^net(\d)+$/;
      # the line we corrected for testing
      next if $opt !~ m/^net(\d+)$/;

    The bug is that the "+" quantifier must sit inside the capturing parentheses, but it is placed after them, so for a label like net12 only a single digit is captured instead of the full interface index.
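    The difference is easy to demonstrate from a shell, using the literal string "net12" in place of the config key:

      # quantifier outside the group: $1 keeps only one digit of the index
      perl -e '"net12" =~ /^net(\d)+$/ and print "$1\n";'   # prints 2
      # quantifier inside the group: the full index is captured
      perl -e '"net12" =~ /^net(\d+)$/ and print "$1\n";'   # prints 12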
  16. qmeventd sends SIGKILL before the VM shuts down its NICs

    I have just uploaded the debug data you requested. Did you try with a sleep 1 in the bridge-down script? It is not really related, but we found a little bug that causes the cleanup procedure not to process net devices correctly: /usr/share/perl5/PVE/CLI/qm.pm +812 #next if $opt !~ m/^net(\d)+$/...
  17. qmeventd sends SIGKILL before the VM shuts down its NICs

    I can reproduce the following behavior on a node running PVE version 7.2-11: invoking a shutdown (from the PVE WebUI) on a VM with 13 interfaces and a running, fully operational qemu-guest-agent results in qmeventd firing SIGKILL. So it seems that qmeventd doesn't give the VM a chance to stop its network...
  18. qmeventd sends SIGKILL before the VM shuts down its NICs

    Hi, thank you for this explanation. On my side I will investigate a little more to clarify my observations. In addition, I am waiting for the check you will do.
  19. qmeventd sends SIGKILL before the VM shuts down its NICs

    Hi, we have been using Proxmox for many years. We have servers running different versions: PVE5.3, PVE6.2 and PVE7.0. We virtualize routers, which means the VMs can have 10 to 20 NICs. In this context we have noticed a problem: after running the qm shutdown command, the QEMU process receives...
  20. link: host: 7 link: 0 is down

    Hi, thanks for your feedback. So on your side it was an external network problem. My nodes are located in different DCs but on the same L2 network. I will investigate further.
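    For that kind of investigation, the state of each corosync/knet link can be checked on every node (a hedged suggestion, not from the original thread):

      # show the local node ID and the status of each configured knet link
      corosync-cfgtool -s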
