I am having problems with pings to disconnected machines on my PVE host: sometimes, instead of resulting in "Destination Host Unreachable", ping replies come back from the wrong IPs:
PING octopus.example.com (192.168.1.38) 56(84) bytes of data.
64 bytes from shark.example.com (192.168.5.14)...
That's not related to an update from PVE7 to PVE8, is it?
I did the upgrade here, and performance is the same as before (but now without a separate kernel module installed in PVE 8):
iperf -c 192.168.1.6 -P 8
------------------------------------------------------------
Client connecting to 192.168.1.6...
On 6, I needed the Mellanox drivers for stability; otherwise, under higher load the card would do strange things, like taking a coffee break. It hasn't done that yet with PVE 8 and the in-kernel driver. <knockonwood>
I'm trying to pave the way for migrating Proxmox 7 to 8. One warning thrown by pve7to8 is about an installed DKMS module - version 4.18.0 of the Mellanox drivers, which I needed because my ConnectX-4 was previously not working reliably. I googled and found that this module will probably cause issues...
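In case anyone else hits this, what I plan to do before the upgrade is roughly the following; the module and package names are assumptions on my part, so take them from what dkms status actually reports:

# list installed DKMS modules and their versions
dkms status
# remove the Mellanox module for all installed kernels
# (module name/version are assumptions - use what 'dkms status' shows)
dkms remove mlnx-ofed-kernel/4.18.0 --all
# if the module was installed from a package, purge that as well (package name is a guess)
apt purge mlnx-ofed-kernel-dkms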
Yep, I did this on my Cisco switch just in the same way that I have configured other bonded trunks on that thing.
But I did just find the problem: it was a simple typo in the config. Looks like the network stack does not complain if you miss a hyphen in the right place... :rolleyes...
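For the record, the kind of slip it was: in /etc/network/interfaces an option name with a missing hyphen just becomes an unknown keyword and is silently ignored. A made-up example (not my actual line):

# typo: "bond slaves" is not a recognised option, so it is silently ignored and the bond comes up without slaves
bond slaves enp193s0f0np0 enp193s0f1np1
# correct
bond-slaves enp193s0f0np0 enp193s0f1np1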
I have the same problems with a vlan-aware bridge... The host can only ping itself: no VMs, not the switch it is connected to, nothing.
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
ether 1c:34:da:7f:b1:52 txqueuelen 1000 (Ethernet)
RX packets 6422 bytes...
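For anyone comparing notes, these are the checks I am using to see whether the VLAN-aware bridge is actually set up the way I think it is (plain iproute2; vmbr0 is a placeholder for the bridge name):

# which VLANs each bridge port carries, and which one is the untagged/PVID VLAN
bridge vlan show
# which ports are attached to the bridge and whether they are forwarding
bridge link show
# confirm VLAN filtering is really enabled on the bridge
cat /sys/class/net/vmbr0/bridge/vlan_filtering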
Thanks all for the help. I did fiddle around with this quite a bit. I can get things to work up to the point where I create a Linux bond and bridge it:
# loopback interface
auto lo
iface lo inet loopback
# physical interfaces
iface enp193s0f0np0 inet manual
iface enp193s0f1np1 inet manual...
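For context, the bond/bridge part I am aiming for looks roughly like this; it is a sketch rather than my exact file, and the host address is a placeholder:

# LACP bond over the two physical ports above
auto bond0
iface bond0 inet manual
    bond-slaves enp193s0f0np0 enp193s0f1np1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

# bridge on top of the bond, carrying the host address (placeholder values)
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0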
Thanks, sounds like a plan.
It's a bit disappointing that I need a separate port for each VLAN. Looks like with the vlan-aware Linux bridge there is a way to do this without a "device" per VLAN, as described here.
But thanks again. Will try both ways and see how it works. (I've read somewhere...
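In case it is useful, the VLAN-aware variant I want to try looks roughly like this (a sketch; the VLAN ID for the host address and the address itself are assumptions):

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# host address on a VLAN sub-interface of the bridge instead of one device per VLAN
auto vmbr0.10
iface vmbr0.10 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1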
Yes, thanks for reminding me that I'm getting old. :)
New version:
auto lo
# loopback interface
iface lo inet loopback
# bond
auto bond0
iface bond0 inet manual
ovs_bridge vmbr0
ovs_type OVSBond
ovs_bonds enp193s0f0np0 enp193s0f1np1
ovs_options...
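For anyone searching later, the complete OVS variant I am working from looks roughly like this; it follows the Proxmox OVS examples, and the internal port name, the management VLAN tag and the address are assumptions on my part:

auto bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds enp193s0f0np0 enp193s0f1np1
    ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 mgmt0

# internal port for the host/management address (tag and address are placeholders)
auto mgmt0
iface mgmt0 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=1
    address 192.168.1.10/24
    gateway 192.168.1.1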
I'm planning to move my physical firewall into a Proxmox VM. For this purpose, I need to "upgrade" my network config. Currently, Proxmox is connected to an access port on my switch. In the new config, Proxmox should receive all VLANs for passthrough over an LACP trunk port to one VM.
Current...
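On the VM side, my understanding is that with a VLAN-aware bridge I can either leave the VLAN tag empty so the firewall VM sees all tagged VLANs, or restrict the trunk explicitly via the trunks option; something along these lines, with the VMID and VLAN IDs as placeholders:

# pass a restricted trunk to the firewall VM (VMID and VLAN IDs are placeholders)
qm set 100 --net0 'virtio,bridge=vmbr0,trunks=10;20;30'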
TLDR of the below: Manual install of the Mellanox driver for the PVE kernel worked. Not sure I can remove the standard Linux headers, though, as this will also remove other packages which may be required for DKMS?
Long version:
It seems what has happened is that I needed updated mellanox...
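For anyone who ends up here: a rough sketch of the manual route, assuming the stock MLNX_OFED installer (the archive name is a placeholder; --add-kernel-support rebuilds the modules against the running kernel):

# headers for the running PVE kernel are needed for the rebuild
apt install pve-headers-$(uname -r)
# inside the unpacked MLNX_OFED archive (name/version are placeholders)
cd MLNX_OFED_LINUX-<version>-debian11.3-x86_64
./mlnxofedinstall --add-kernel-support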
Thanks, so I have the PVE kernels installed by default. Not sure where the old Linux 5.10.0 kernel comes from; it has never been booted.
As mentioned, after the error occurred I did install pve-headers. Should I now run
apt -f install pve-kernel-5.15.74-1-pve?
apt -f install <some meta package for pve...
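If it helps anyone: as far as I know, the PVE 7 metapackages are pve-kernel-5.15 and pve-headers-5.15, which keep kernel and headers in step on future updates, so something like:

apt install pve-kernel-5.15 pve-headers-5.15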
Hi, I just updated my Proxmox host after quite some time of it running stable while I was traveling (so better not touch a running system). During the update, I saw the following error:
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.15.74-1-pve...
Hi, I'm currently running RAIDs on my Proxmox server with ZFS, shared via NFS with VMs and other machines on the network. I am now looking to build another single SSD into the system, which I would also like to share via NFS - but in ext4 format, as I do not see a huge benefit in running single...
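What I have in mind for the ext4 SSD is the plain kernel NFS route; roughly the following, where the device name, mount point and subnet are assumptions (mounting by UUID would be more robust than /dev/sdX):

# format and mount the SSD (device name is a placeholder)
mkfs.ext4 /dev/sdX
mkdir -p /mnt/ssd
echo '/dev/sdX /mnt/ssd ext4 defaults 0 2' >> /etc/fstab
mount /mnt/ssd

# export it over NFS to the local subnet
echo '/mnt/ssd 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra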
So I still have an old XP VM with which I need to share a couple of file folders. With PVE 6, I had been using Samba on the Proxmox machine to share folders via SMB1 with that VM. Now, with PVE 7 and Debian Bullseye, Samba has moved to 4.13 or above, which appears to have removed support for...
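For reference, re-enabling SMB1 for that VM in Samba 4.13+ appears to come down to switching the old protocol and classic NTLM back on in smb.conf; the share below is only an example (name and path are made up):

[global]
    # allow the legacy SMB1/NT1 dialect and classic NTLM for the XP guest
    server min protocol = NT1
    ntlm auth = yes

[xpshare]
    # example share - name and path are placeholders
    path = /srv/xpshare
    read only = no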
Thanks, it looks to me as if the content of /proc/net/bonding/bond0 is ok…?
Ethernet Channel Bonding Driver: v5.13.19-2-pve
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0...
Hi, I'm trying to connect my Proxmox machine via link aggregation to an HP 1810 switch. For this purpose, I have configured two ports as a trunk (LACP active) in the web interface of the switch, and link aggregation on Proxmox. netstat -i suggests that this is working:
Iface MTU RX-OK RX-ERR...
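Two further checks I am using to confirm that LACP has actually negotiated, beyond netstat (both are standard kernel interfaces):

# bonding details; the Partner Mac Address under the 802.3ad section should not be all zeros
cat /proc/net/bonding/bond0
# detailed link state of the bond itself
ip -d link show bond0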