Make a new Ethernet card the primary for a node

barchetta

New Member
Apr 3, 2022
I added a new 2.5G Ethernet card to one of my nodes and I'm trying to figure out how to make it the primary for the node, as my other node is already on 2.5G.

So right now I have a Linux bridge, vmbr1, at 192.168.0.11 on a /24 network.

I got lucky and some sort of Linux magic occurred when I added the new card: Proxmox sees it, I added a bridge on it (vmbr3) with a 192.168.0.x IP, and it works fine.

1. So what would be the proper procedure to make the new controller the primary with the same IP? I'd prefer not to do this from the command line as I'm pretty weak in Linux, but if I have to, that's fine; Bing is a good friend of mine.

2. Also, do I have to manually modify a network config file somewhere, or did Proxmox already update it because I added the bridge from the UI? I'm running 8.1.4, fully patched.

3. I'm in a cluster, so I'm wondering what I'd need to do there.

Thanks for any help on this.
 
More information

nano /etc/network/interfaces

auto lo
iface lo inet loopback


iface eno0 inet manual

iface eno1 inet manual

iface enp15s0 inet manual

iface enp16s0 inet manual

auto vmbr0
iface vmbr0 inet manual <----- this is my WAN interface which is DHCP (1gbe)
bridge-ports eno0
bridge-stp off
bridge-fd 0
#Direct to WAN NIC


auto vmbr1
iface vmbr1 inet static <-----Id like to disable this interface altogether. (1gbe)
address 192.168.0.11/24
gateway 192.168.0.199
bridge-ports eno1
bridge-stp off
bridge-fd 0
#LAN NIC


auto vmbr2
iface vmbr2 inet manual <------ not in use
bridge-ports enp15s0
bridge-stp off
bridge-fd 0


auto vmbr3
iface vmbr3 inet static <----- Id like to make this the node interface for all vms and also the 2 host cluster (2.5gbe)
address 192.168.0.12/24 <------------ temporary to test card and it works fine
bridge-ports enp16s0
bridge-stp off
bridge-fd 0

I'm worried that if I change vmbr3 to 192.168.0.11/24 and reboot, I'm going to break my 2-host cluster and of course all VMs which use this interface... I know I could easily modify all VMs to use this interface.

Help?
 
I'm worried that if I change vmbr3 to 192.168.0.11/24 and reboot, I'm going to break my 2-host cluster and of course all VMs which use this interface...
Two-node clusters are always problematic (search this forum) because both stop working when they can't see each other.
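The arithmetic behind this: corosync (which Proxmox clustering is built on) requires a strict majority of votes to keep quorum, so with two one-vote nodes a lone survivor falls below the threshold. A trivial illustration of the majority formula (assuming the default of one vote per node):

```shell
# Quorum threshold is floor(total_votes / 2) + 1 (a strict majority).
total=2
quorum=$(( total / 2 + 1 ))
echo "2-node cluster: need $quorum votes, but a lone survivor has only 1"

# With a third vote (e.g. a QDevice), one surviving node plus that
# extra vote reaches 2 of 3 and keeps quorum.
total=3
quorum=$(( total / 2 + 1 ))
echo "3-vote cluster: need $quorum votes; 1 node + QDevice = 2, still quorate"
```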
auto vmbr1
iface vmbr1 inet static <-----Id like to disable this interface altogether. (1gbe)
address 192.168.0.11/24
gateway 192.168.0.199
bridge-ports eno1
bridge-stp off
bridge-fd 0
#LAN NIC


auto vmbr3
iface vmbr3 inet static <----- Id like to make this the node interface for all vms and also the 2 host cluster (2.5gbe)
address 192.168.0.12/24 <------------ temporary to test card and it works fine
bridge-ports enp16s0
bridge-stp off
bridge-fd 0
I think you can just remove vmbr3 and use its bridge-ports (and address, if you prefer) in vmbr1 to switch to the other network device (keep the gateway, otherwise traffic won't be routed). Make sure you can log in to the Proxmox host console (with a physical keyboard and display) in case it does not work out.
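Putting that together, the merged vmbr1 stanza might look like this (a sketch based on the config posted above; double-check the device names on your own node before applying anything):

```
auto vmbr1
iface vmbr1 inet static
    address 192.168.0.11/24
    gateway 192.168.0.199
    bridge-ports enp16s0
    bridge-stp off
    bridge-fd 0
#LAN NIC (now on the 2.5GbE port)
```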
 
I think what I would do in your position is take the existing vmbr1, already set for address 192.168.0.11/24 and gateway 192.168.0.199 (which, as I understand from you, is the LAN NIC currently used for the cluster, bridging eno1), and change it to bridge enp16s0 instead.

So in summary: change the current bridge-ports eno1 under vmbr1 to bridge-ports enp16s0. You'll also need to remove the current vmbr3.

This way I believe your cluster and VMs won't be affected by the NIC change.

Disclaimer: I have never done this myself.
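If you do end up in a shell, one low-risk way to stage this edit is to make the change on a throwaway copy first and only install it once it looks right. A minimal sketch (the file name interfaces.test is just an example, and the stanza below mirrors the one posted earlier):

```shell
# Work on a throwaway copy so a typo can't take the node offline.
cat > interfaces.test <<'EOF'
auto vmbr1
iface vmbr1 inet static
    address 192.168.0.11/24
    gateway 192.168.0.199
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
EOF

# Swap the bridge port from the old 1GbE NIC to the new 2.5GbE NIC.
sed -i 's/bridge-ports eno1/bridge-ports enp16s0/' interfaces.test

# Review the result before copying it over /etc/network/interfaces
# and applying it (e.g. with ifreload -a, or Apply Configuration in the GUI).
grep bridge-ports interfaces.test
```

As suggested above, keep console access (physical keyboard and display) handy when applying the real file, in case the node drops off the network.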
 
You folks are wonderful. I'm going to go over this all carefully in the morning and I will let you all know the outcome.

EDIT: couldn't wait and read all the comments. The cluster with 2 nodes was a mistake, I think. Node 1 seems to reboot, which causes all sorts of problems; it's an older HP Gen7 tower server and sucks power. As much as I like 2 nodes, I may just add some VMs to it, power it down, and leave it as a standby.

I bought this as a 2nd node recently and it is an incredible performer. I have 6 security cams running on it with a VM that does AI detection, and it is almost idle, whereas the old box was at 30% constantly. Not to mention several other VMs which perform way beyond expectations; Windows 10 runs like it's on hardware.

https://www.amazon.com/gp/product/B0CPSFQPV6/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

I'll have to research how to break up a cluster. 2 nodes, as indicated, was a mistake.

EDIT again: One VM which runs a pfSense firewall is what I really wanted HA for, and it just doesn't work well because pfSense doesn't like a snapshot in a running state. pfSense has its own method for HA, syncing to another instance of itself; I may set that up and try it after I break up the cluster.
 
I bought this as a 2nd node recently and it is an incredible performer
Interested to know how much you paid for it. What's your REAL power consumption like?
Looking quickly at the specs, it looks like it can take a second NVMe. I think it should be possible to set them up in a boot RAID for extra reliability.
 
Interested to know how much you paid for it. What's your REAL power consumption like?
Looking quickly at the specs, it looks like it can take a second NVMe. I think it should be possible to set them up in a boot RAID for extra reliability.
Yes, the 2nd NVMe was the clincher for me. The price fluctuates and you can buy it barebones, but they put Kingston in from the factory, so it's not a bad deal loaded up. I added a 4g NVMe to it. I'll put my watt meter on it when I get a chance; I'm sure it is well under 30 watts.

It takes up like 1/1000th the space of my tower server. :) I have 2 USB drives hanging off it for backups. Anyway, I realize this is a little expensive for home-lab use, but I wanted the processing power for now and the future.

Thanks again for all the input. I wasn't aware that 2 nodes in a cluster was as bad an idea as it sounds like it is; I was hoping the 2.5GbE would help out a bit.
 
