Search results

  1. ThinkAgain

    Help please: Debugging ping responses from wrong addresses

    I am having problems with pings to disconnected machines on my PVE host: sometimes, instead of "Destination Host Unreachable", I get ping replies from the wrong IPs: PING octopus.example.com (192.168.1.38) 56(84) bytes of data. 64 bytes from shark.example.com (192.168.5.14)...
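
    A few commands that can help narrow down who is actually answering (a sketch; the bridge name vmbr0 is an assumption, the address is the one from the snippet):

      ip neigh show 192.168.1.38          # which MAC, if any, is cached for the "dead" address?
      tcpdump -eni vmbr0 arp or icmp      # watch which MAC answers the ARP and echo requests
      arping -I vmbr0 192.168.1.38        # iputils-arping: does more than one MAC reply?
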
  2. ThinkAgain

    Upgrade 7 to 8, Connect-4 dkms module installed

    That's not related to an update from PVE7 to PVE8, is it? I did the upgrade here, and performance is as before (but without a separate kernel module installed now in PVE8: iperf -c 192.168.1.6 -P 8 ------------------------------------------------------------ Client connecting to 192.168.1.6...
  3. ThinkAgain

    Upgrade 7 to 8, Connect-4 dkms module installed

    On 6, I needed mellanox drivers for stability. Otherwise, under higher load the card was doing strange things, like taking a coffee break. It hasn't done that, yet, with pve8 and the kernel driver. <knockonwood>
  4. ThinkAgain

    Upgrade 7 to 8, Connect-4 dkms module installed

    Looks better here: [ 10.320111] mlx5_core 0000:c1:00.0: firmware version: 14.32.1010 [ 10.320160] mlx5_core 0000:c1:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) [ 10.625522] mlx5_core 0000:c1:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) [...
  5. ThinkAgain

    Upgrade 7 to 8, Connect-4 dkms module installed

    Didn't get a reply and gave it a try. Currently running MT27710 under pve8 with the 6.5.11-7 kernel (enterprise repo). Which issues are you seeing?
  6. ThinkAgain

    Upgrade 7 to 8, Connect-4 dkms module installed

    I'm trying to pave the way for migrating Proxmox 7 to 8. One warning thrown by pve7to8 is about an installed dkms module - version 4.18.0 of the Mellanox drivers, which I needed because my ConnectX-4 previously was not working reliably. I googled and found that this module will probably cause issues...
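
    A sketch of how the module situation can be checked before the upgrade (the exact dkms module and package names on that host are assumptions):

      dkms status         # list dkms modules and the kernels they are built for
      pve7to8 --full      # re-run the full upgrade checklist
      # if the OFED dkms module is to be dropped in favour of the in-kernel mlx5 driver:
      #   dkms remove <module>/<version> --all
      #   apt purge <ofed-dkms-package>
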
  7. ThinkAgain

    migrating from simple network config to OpenVSwitch

    Yep, I did this on my Cisco switch just in the same way that I have configured other bonded trunks on that thing. But, I did just find the problem: It was a simple typing error in the config. Looks like the network stack does not complain if you miss a hyphen in the right place... :rolleyes...
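
    A hypothetical illustration of the kind of typo that slips through (the actual mistake isn't shown in the post): ifupdown simply skips option names it does not recognize instead of failing.

      iface bond0 inet manual
          bond slaves enp193s0f0np0 enp193s0f1np1      # missing hyphen: silently ignored
          # bond-slaves enp193s0f0np0 enp193s0f1np1    # what it should have been
          bond-mode 802.3ad
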
  8. ThinkAgain

    migrating from simple network config to OpenVSwitch

    I have the same problems with a vlan aware bridge... The host can only ping itself, no VMs, not the switch it is connected to, nothing. bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500 ether 1c:34:da:7f:b1:52 txqueuelen 1000 (Ethernet) RX packets 6422 bytes...
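
    Some checks that are useful in this situation (a sketch, assuming the bridge is vmbr0 and the bond is bond0):

      bridge vlan show                # are the expected VLAN IDs configured on bond0 and vmbr0?
      bridge link show                # is bond0 attached to the bridge and in forwarding state?
      cat /proc/net/bonding/bond0     # did LACP negotiate (both slaves up, matching aggregator IDs)?
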
  9. ThinkAgain

    migrating from simple network config to OpenVSwitch

    Thanks all for the help. I did fiddle around with this quite a bit. I can get things to work up to the point that I create a Linux bond and bridge it: # loopback interface auto lo iface lo inet loopback # physical interfaces iface enp193s0f0np0 inet manual iface enp193s0f1np1 inet manual...
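
    For reference, a minimal sketch of the bond-plus-bridge part that the snippet truncates (addresses are placeholders; LACP bond options are an assumption):

      auto bond0
      iface bond0 inet manual
          bond-slaves enp193s0f0np0 enp193s0f1np1
          bond-mode 802.3ad
          bond-miimon 100

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.1.2/24
          gateway 192.168.1.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
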
  10. ThinkAgain

    migrating from simple network config to OpenVSwitch

    Thanks, sounds like a plan. It's a bit disappointing that I need a separate port for each VLAN. Looks like with the VLAN-aware Linux bridge, there is a way to do this without a "device" per VLAN, as described here. But thanks again. Will try both ways and see how it works. (I've read somewhere...
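
    A sketch of the VLAN-aware variant hinted at above - one bridge carries all VLANs and the host address sits on a tagged sub-interface (VLAN 10 and the addresses are assumptions):

      auto vmbr0
      iface vmbr0 inet manual
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094

      auto vmbr0.10
      iface vmbr0.10 inet static
          address 192.168.1.2/24
          gateway 192.168.1.1
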
  11. ThinkAgain

    migrating from simple network config to OpenVSwitch

    Yes, thanks for reminding me that I'm getting old. :) New version: auto lo # loopback interface iface lo inet loopback # bond auto bond0 iface bond0 inet manual ovs_bridge vmbr0 ovs_type OVSBond ovs_bonds enp193s0f0np0 enp193s0f1np1 ovs_options...
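
    The truncated ovs_options and the matching bridge stanza, sketched with typical values (not necessarily the ones used in the post):

      auto bond0
      iface bond0 inet manual
          ovs_bridge vmbr0
          ovs_type OVSBond
          ovs_bonds enp193s0f0np0 enp193s0f1np1
          ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast

      auto vmbr0
      iface vmbr0 inet manual
          ovs_type OVSBridge
          ovs_ports bond0
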
  12. ThinkAgain

    migrating from simple network config to OpenVSwitch

    I'm planning to move my physical firewall into a Proxmox VM. For this purpose, I need to "upgrade" my network config. Currently, Proxmox is connected to an access port on my switch. In the new config, Proxmox is to receive all VLANs on an LACP trunk port and pass them through to one VM. Current...
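
    One way to hand the trunked VLANs to the firewall VM once the bridge is in place (VM id 100 and the VLAN list are just examples):

      # pass only the listed VLAN tags through to the VM's NIC; omit trunks= to pass everything
      qm set 100 --net0 virtio,bridge=vmbr0,trunks='10;20;30'
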
  13. ThinkAgain

    Headers error when updating

    TLDR of the below: Manual install of the Mellanox driver for the PVE kernel worked. Not sure I can remove the standard Linux headers, though, as this will also remove other packages which may be required for DKMS? Long version: It seems what has happened is that I needed updated Mellanox...
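
    The manual install boiled down to something like this (4.18.0 is the driver version mentioned in the thread; the dkms module name is an assumption):

      apt install pve-headers-$(uname -r)                    # headers matching the running PVE kernel
      dkms build   mlnx-ofed-kernel/4.18.0 -k $(uname -r)    # module name is an assumption
      dkms install mlnx-ofed-kernel/4.18.0 -k $(uname -r)
      dkms status                                            # confirm the module is built for this kernel
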
  14. ThinkAgain

    Headers error when updating

    Thanks, so I have the PVE kernels installed by default. Not sure where the old Linux 5.10.0 kernel comes from. It's never booted. As mentioned, after the error occurred I did install pve-headers. Should I now apt -f install pve-kernel-5.15.74-1-pve? apt -f install <some meta package for pve...
  15. ThinkAgain

    Headers error when updating

    Hi, I just updated my Proxmox host after quite some time of it running stable and me traveling (so better not touch a running system). During the update, I saw the following error: Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.15.74-1-pve...
  16. ThinkAgain

    ZFS and EXT4 NFS sharing in parallel?

    Hi, I'm currently running RAIDs on my Proxmox server, with ZFS, shared via NFS with VMs and other machines on the network. I am now looking to add another single SSD to the system, which I would also like to share via NFS - but in ext4 format, as I do not see a huge benefit in running single...
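
    The kernel NFS server can export an ext4 mount next to the existing ZFS shares; a minimal sketch (device, mountpoint and export network are placeholders):

      mkfs.ext4 /dev/sdX
      mkdir -p /mnt/ssd
      echo 'UUID=<uuid-of-sdX> /mnt/ssd ext4 defaults 0 2' >> /etc/fstab
      mount /mnt/ssd
      echo '/mnt/ssd 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
      exportfs -ra        # reload the export table; ZFS and ext4 shares are served side by side
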
  17. ThinkAgain

    ACME Let's Encrypt and DNS with Selfhost

    Hi Marvin, great work! Have you already submitted your script to letsencrypt so that it can be included there as an API script?
  18. ThinkAgain

    PVE 7.1: SMB1 / sharing with WinXP VM

    So I still have an old XP VM with which I need to share a couple of file folders. With PVE 6, I've been using Samba on the Proxmox machine to share folders via SMB1 with that VM. Now, with PVE 7 and Debian Bullseye, Samba has moved to 4.13 and above, which appears to have removed support for...
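
    In Samba 4.13, SMB1/NT1 is off by default but can still be re-enabled; a minimal smb.conf sketch for the XP VM (share name and path are placeholders, and SMB1 should only be exposed on a trusted network):

      [global]
          server min protocol = NT1    # allow SMB1 (NT1) connections again
          # ntlm auth = yes            # only if the old client cannot do NTLMv2

      [xpshare]
          path = /srv/xpshare
          read only = no
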
  19. ThinkAgain

    LAG, HP1810 Switch

    Thanks, it looks to me as if the content of /proc/net/bonding/bond0 is ok…? Ethernet Channel Bonding Driver: v5.13.19-2-pve Bonding Mode: IEEE 802.3ad Dynamic link aggregation Transmit Hash Policy: layer2+3 (2) MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0...
  20. ThinkAgain

    LAG, HP1810 Switch

    Hi, I'm trying to connect my Proxmox machine via link aggregation to an HP1810 switch. For this purpose, I have configured two ports as a trunk (LACP Active) in the web interface of the switch and link aggregation on Proxmox. netstat -i suggests that this is working: Iface MTU RX-OK RX-ERR...
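
    The matching /etc/network/interfaces side, as a sketch (NIC names and addresses are assumptions; layer2+3 mirrors the bonding output quoted in the previous result):

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode 802.3ad
          bond-miimon 100
          bond-xmit-hash-policy layer2+3

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.1.10/24
          gateway 192.168.1.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
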
