Can you change interface but keep the same IP and not break things?

If so, what's the best way to do that? If I need to take the host down, that's not a problem.

I'd like to switch from a 10GBase-T to DAC interface.

I've heard that changing a host's IP isn't trivial, so I'd rather avoid doing that.
 
Can you change interface but keep the same IP and not break things?
Sure, why not? Of course both NICs have to be connected to the very same network. Only the MAC will change, so if you have a router/firewall with MAC-based rules, you should check those first.
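To double-check the new MAC before touching any of those rules, something like this lists every NIC with its state and MAC (the interface name in the second line is only an example, yours will differ):

ip -br link show              # brief list: name, state and MAC for every interface
ip link show enp1s0f0np0      # example name for the new DAC port; adjust to yours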

For me it would go like this: make a backup of /etc/network/interfaces; inside that file, edit the "bridge-ports" line of the correct "vmbrX" to reflect the desired change. Run "ifreload -a". If that fails, run "systemctl restart networking.service" instead.
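As a rough sketch, assuming the old copper NIC is eno1 and the new DAC port comes up as enp1s0f0 (check the real name with "ip -br link" first):

cp /etc/network/interfaces /etc/network/interfaces.bak   # keep a backup to fall back to
# in /etc/network/interfaces, change the bridge member of the relevant vmbrX:
#     bridge-ports eno1   ->   bridge-ports enp1s0f0
ifreload -a                                              # apply the new config
systemctl restart networking.service                     # only needed if ifreload fails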

Sometimes even simple edits fail and break the system. When I do something like this I make sure to have access to a console, either physically or via DRAC/IPMI/whatever...

Edit: changing the name (of a node) requires some other steps and is considered problematic...
 
I recently upgraded one of my servers to a 10GbE SFP+ NIC. I essentially did what Udo suggests, with a small twist. I added the card first, then altered my /etc/network/interfaces file to get the NIC working and test it out. I left the original network connection and IP address intact and added a second vmbr for the 10GbE NIC. In my VMs I switched them from vmbr0 to vmbr1. This way my management interface remained on the 1GbE network link, and all the VM traffic went over the 10GbE link. I wanted to have two links to the server in case the 10GbE link went down for some reason.

I confirmed the VMs were getting the higher bandwidth using iperf tests. Between VMs on the same VLAN on the same Proxmox host, I was achieving a crazy fast number (something like 20 Gbps, I don't remember exactly). Between VMs on different hosts but on the same VLAN, with both hosts having 10GbE NICs, I was getting close to 9000 Mbps (as hoped for), and between VMs on different VLANs I was getting around 2200 Mbps. This makes sense: traffic within the same host on the same VLAN never really leaves the PCIe bus, and the cross-VLAN number fits because my pfSense box only has 2.5GbE NICs.
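If you want to do the same from the CLI instead of the GUI, moving a VM's NIC to the new bridge and re-running the test looks roughly like this (VMID 101, the virtio model and the server IP are just examples):

qm config 101 | grep net0                  # note the current bridge and macaddr
qm set 101 --net0 virtio,bridge=vmbr1      # move net0 to the 10GbE bridge
                                           # (re-add macaddr=... from above to keep the VM's MAC)
iperf3 -s                                  # run on the "server" VM
iperf3 -c 10.10.10.50 -t 30                # run on the "client" VM against the server's IP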
 
I actually didn't need to add another IP address. I just added the new vmbr. Most of the config below was already there from my working setup; the "iface ens4f0 inet manual" line and the vmbr1 section at the bottom are what I added to make the 10GbE card work:

auto lo
iface lo inet loopback

iface eno1 inet manual

iface ens4f0 inet manual

auto vmbr0
iface vmbr0 inet static
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4092

auto vmbr0.10
iface vmbr0.10 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

auto vmbr1
iface vmbr1 inet static
        bridge-ports ens4f0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4092
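After saving an edit like this you can sanity-check which physical port ended up in which bridge (the output will obviously look different on your box):

ip -br link show          # vmbr0 and vmbr1 should both show UP once the links are connected
bridge link show          # shows eno1 with master vmbr0 and ens4f0 with master vmbr1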
 
