Hi,
we are facing the same behavior on NICs that use the i40e module:
WARNING: Link is up but PHY type 0xe is not recognized.
It seems to be a misalignment between firmware and driver versions, but I am not sure because not all my Proxmox nodes show those logs.
Have you found a solution or workaround?
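In case it is useful, this is roughly how I compare the versions on each node (eno1 is just an example interface name here):
# show the loaded driver version and the NIC firmware version
ethtool -i eno1 | grep -E 'driver|version|firmware'
# look for the PHY warning and other i40e messages
dmesg | grep -i i40e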
Hello,
we've been virtualizing MikroTik CHR routers on Proxmox for several years now. In my opinion the performance limitation is the cost of processing packets twice, at kernel level, in both the host and guest network stacks. As you noticed, there is no problem achieving 10 Gbps with a single TCP...
Hi,
we are starting BlueField-2 DPU testing with Proxmox.
The goal is to move the VXLAN/EVPN control plane into the DPU and to accelerate packet processing with its hardware capabilities.
First I'll try the native HBN service, based on the Cumulus network OS, running in a container.
Then I will test a custom Debian system...
Recently, I have discovered a TSO issue on PVE 7.0.
Some tap interfaces do not have the TSO flag set.
To check whether TCP segmentation offload is correctly set, run ethtool on the tap interface:
ethtool -k tap***i* | grep offload
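For example, on one of my VMs (the tap name tap100i0 below is just a placeholder for VMID 100, first vNIC) I check it and, as a temporary workaround, re-enable it by hand:
# check the current TSO state of the tap device
ethtool -k tap100i0 | grep tcp-segmentation-offload
# re-enable it manually as a temporary workaround
ethtool -K tap100i0 tso on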
Keroex,
The most secure and stable way requires that ZFS accesses the disks directly.
On HPE Gen10 with a P840 controller you can set some disks in passthrough mode and assign the others to hardware RAID.
For example, you can install Proxmox on hardware RAID (a mirror of 2 disks is fine) so the server survives if a disk dies...
Hi,
I am doing some QEMU 7 tests; rx_queue_size=1024,tx_queue_size=1024 are present in the VM command line.
In the guest OS only the RX queue is increased, TX stays at 256.
Did you notice that too? Is this the normal behavior?
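For reference, this is how I check it (the VMID 100 and the guest interface name eth0 are placeholders):
# on the host, confirm the options are really passed to QEMU
qm showcmd 100 | grep -oE '(rx|tx)_queue_size=[0-9]+'
# inside the guest, read the current and maximum virtio ring sizes
ethtool -g eth0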
@mika,
it is possible to change the ring buffer on a virtio-net vNIC (I have not tested it).
This should logically reduce packet loss during intensive UDP traffic.
Don't forget to check the other conditions: path L2 MTU, VM tx buffer (txqueuelen), VM scaling governor, hypervisor PCI profile (max perf is...
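For a Linux guest, the kind of checks I have in mind would look like this (eth0 is just an example name, and as said above I have not validated the values):
# resize the virtio-net rings inside the guest
ethtool -G eth0 rx 1024 tx 1024
# check and, if needed, raise the tx queue length
ip link show eth0 | grep qlen
ip link set eth0 txqueuelen 10000
# check the CPU frequency scaling governor (if cpufreq is exposed)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor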
Hi,
I have checked the delnode procedure and here are the results:
root@FLEXCLIPVE03:~# pvecm delnode FLEXITXPVE03
Could not kill node (error = CS_ERR_NOT_EXIST)
Killing node 2
command 'corosync-cfgtool -k 2' failed: exit code 1
It seems that the FLEXITXPVE03 node has already been removed in another way...
Hi,
@aaron I've tested the procedure and there is an inconsistency.
In that case, first remove the qdevice: pvecm qdevice remove
Then check pvecm status to confirm that only 2 votes are expected at most.
Move all guests off the node that is to be reinstalled.
Remove the one node following the...
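As a rough sketch, the order I followed was this (node names and the VMID are placeholders on my side):
pvecm qdevice remove                 # remove the qdevice first
pvecm status                         # confirm "Expected votes: 2"
qm migrate 101 pve-keep --online     # move each guest off the node to reinstall
pvecm delnode pve-reinstall          # finally remove the node from the cluster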
@iniciaw what you describe is the expected behavior, and I have not noticed any issue with bond interfaces.
For bridge interfaces, PVE6 and PVE7 behave differently.
Let me clarify this.
With the same /etc/network/interfaces file, only the physical NICs and the dummy NICs have mtu 8950 specified.
On PVE6...
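A simplified extract of what I mean (ifupdown2 syntax; interface names and the address are just examples, and only the physical and dummy interfaces carry the explicit MTU):
auto eno1
iface eno1 inet manual
    mtu 8950

auto dummy0
iface dummy0 inet manual
    link-type dummy
    mtu 8950

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    # no explicit mtu on the bridge: this is where PVE6 and PVE7 differ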
Hi,
I've noticed the same behavior on PVE 7.0 nodes.
Previous versions PVE6 and PVE5 running with ifupdown do not have the problem.
@spirit, what conclusions did you draw?
Hello,
very interesting post.
I have tested PVE 7.0 VXLAN-EVPN with SR-IOV (not flexible, and the number of PCI devices is limited) and the mlx5 vDPA solution (buggy when using many vhost-vdpa devices). I never tested the Open vSwitch + DPDK solution because its VXLAN-EVPN implementation does not accept FRR as EVPN...
With a sleep 30 you should notice after 5 seconds that qmeventd sends a SIGKILL to the QEMU process and then invokes qm cleanup, but this one doesn't call tap_unplug. (To display the SIGKILL in the logs you must set /etc/pve/.debug to 1.)
After a qm shutdown, if qmeventd fires SIGKILL, we expect the cleanup...
# the original line
next if $opt !~ m/^net(\d)+$/;
# we have corrected the line for testing
next if $opt !~ m/^net(\d+)$/;
The bug is that the "+" quantifier must be inside the capturing parentheses, but it is placed after them.
So a net label like net12 is not captured correctly: only one digit of the index ends up in the capture instead of the full number.
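A quick way to see the difference from the shell (just a demonstration, not the PVE code path itself):
perl -e '"net12" =~ m/^net(\d)+$/ and print "$1\n";'   # prints 2: only the last digit ends up in $1
perl -e '"net12" =~ m/^net(\d+)$/ and print "$1\n";'   # prints 12: the full index is captured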
I have just uploaded the debug data you requested.
Did you try with a sleep 1 in the bridge-down script?
It is not really related, but we found a little bug that causes the cleanup procedure not to process the net devices correctly:
/usr/share/perl5/PVE/CLI/qm.pm +812
#next if $opt !~ m/^net(\d)+$/...
I reproduce the following behavior on a node running PVE 7.2-11.
Invoking a shutdown (from the PVE WebUI) on a VM with 13 interfaces, with qemu-guest-agent running and fully operational, results in qmeventd firing SIGKILL. So it seems that qmeventd doesn't give the VM a chance to stop its network...
Hi,
Thank you for this explanation.
On my side I will investigate a little more to clarify my observations.
In addition, I am waiting for the check you will do.
Hi,
we have been using Proxmox for many years.
We have servers running different versions: PVE 5.3, PVE 6.2 and PVE 7.0.
We virtualize routers, which means the VMs can have 10 to 20 NICs.
In this context we have noticed a problem:
after running the qm shutdown command, the QEMU process receives...