Hello, the 1Gbps embedded NIC is not cutting it anymore. I want to migrate everything (network-wise) on PVE 7.4-20 and PBS 2.4-7 to a newly added Intel 2.5Gbps NIC in my HPE DL360 Gen9.
I asked DeepSeek for the steps, and I need human experts to recommend if the steps are indeed workable.
PS: I might use Bonding, and definitely won't use PCI passthrough.
Thank you very much!
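For context, the current /etc/network/interfaces is basically the stock single-bridge setup, roughly like this (interface name and addresses are placeholders, not the real ones):

```conf
# Current (placeholder) setup: embedded 1GbE NIC as the only bridge port
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```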
Migrating from Embedded NIC to 2.5Gbps Add-in NIC in HPE DL360 Gen9 (Proxmox)
This guide covers migrating your Proxmox VE network configuration, VMs, and containers from the embedded NIC to a new 2.5Gbps PCIe NIC while minimizing downtime.

Step 1: Prepare the New NIC
- Install the 2.5Gbps NIC physically
- Power off the server.
- Insert the NIC into a PCIe slot.
- Boot up and verify detection:
```bash
lspci | grep -i ethernet
```
- Check if the driver is loaded (lspci -k).
- Identify the new NIC interface name
```bash
ip a
```
- Look for a new interface (e.g., enp4s0, ens1f0).
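To confirm the new card is using the right driver and actually negotiated 2.5Gbps, something like this should work (enp4s0 is just the example name from above; substitute the real one):

```bash
# Show which kernel driver and firmware the new NIC is using
ethtool -i enp4s0

# Show negotiated speed/duplex; expect "Speed: 2500Mb/s" on a 2.5GbE link
ethtool enp4s0 | grep -E 'Speed|Duplex|Link detected'
```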
Step 2: Configure the New NIC in Proxmox
Option A: Manual Network Migration (Recommended for Minimal Downtime)
- Edit /etc/network/interfaces
```bash
nano /etc/network/interfaces
```
- Add the new NIC configuration (keeping the old one for fallback):
```conf
auto enp4s0                # New 2.5Gbps NIC
iface enp4s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bridge-ports enp4s0    # Change from old NIC (e.g., eno1) to new NIC
    bridge-stp off
    bridge-fd 0
```
- Apply changes
```bash
systemctl restart networking
```
- Test connectivity
```bash
ping 8.8.8.8
```
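A possibly gentler way to apply and verify this, assuming ifupdown2 is installed (it normally is on PVE 7), is to reload the config in place and then check that the bridge really picked up the new port:

```bash
# Apply /etc/network/interfaces changes without a full networking restart (ifupdown2)
ifreload -a

# The new NIC should now be listed as a port of vmbr0
ip link show master vmbr0

# The default route should still be present via vmbr0
ip route show default
```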
Option B: Failover Bonding (For Redundancy)
If you want to use both NICs (old + new) in a bond:
```conf
auto bond0
iface bond0 inet manual
    bond-slaves eno1 enp4s0    # Old and new NICs
    bond-miimon 100
    bond-mode active-backup    # Or balance-alb for load balancing

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```
Then restart networking:
```bash
systemctl restart networking
```
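If the bond option is used, the bonding driver's state (mode, currently active slave, per-NIC link status) can be inspected like this:

```bash
# Show bond mode, active slave, and link status of each member NIC
cat /proc/net/bonding/bond0
```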
Step 3: Update VM/Container Network Settings
- VMs/CTs using vmbr0 will automatically switch to the new NIC (no changes needed; a quick check is sketched below).
- If a VM had the old NIC attached via PCIe passthrough, switch it back to a bridged virtio NIC:
```bash
qm set <VMID> -net0 virtio,bridge=vmbr0
```
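To double-check which bridge existing guests reference (the IDs are placeholders), the guest configs can be listed like this:

```bash
# Both VMs and containers should show net devices pointing at bridge=vmbr0
qm config <VMID> | grep ^net
pct config <CTID> | grep ^net
```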