Best Way to Add New Adapter for Use as Management Interface?

Sep 1, 2022
Following on from this: https://forum.proxmox.com/threads/changing-proxmox-management-interface.98857/

I've got a PVE node with 2x 2.5GbE ports built in, which I'm planning to bond with active load balancing to dedicate a total of 5 Gbps of full-duplex bandwidth (max 2.5 Gbps full duplex per connection) to my VMs.
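For reference, here's roughly what I'm picturing for the bond in /etc/network/interfaces. This is just a sketch I put together from the docs, not something I've applied yet; eno1/eno2 and the addresses are placeholders for my actual names, and 802.3ad/LACP needs matching switch configuration:

Code:
# both physical NICs, no IPs of their own (placeholder names)
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

# bond the two ports; LACP (802.3ad) if the switch supports it,
# or balance-alb for switch-independent active load balancing
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

# VM bridge rides on the bond (example addresses)
auto vmbr0
iface vmbr0 inet static
    address 192.168.18.20/24
    gateway 192.168.18.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0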

Because I (1) want to practice good network segmentation; and (2) want to isolate my PVE cluster operations from my VM networking to more easily monitor how much traffic is going where/doing what, I want to attach a USB 3.0 2.5GbE adapter and use that as my management port. I don't have separate VLANs/subnets set up yet, but I will in the future, so I wanted to go ahead and get this adapter installed and working. I've figured out how to add the new adapter as a new Linux Bridge (vmbr1), but I'm not sure that's the way to go.

In the alternative, what if I just edited vmbr0 to use only the USB adapter, instead of either of the internal ethernet ports?
So long as I don't change any other settings, this seems to be the best option. The default network settings are all already there and correct. All I need it to do is use a different port. The existing static IP and gateway and everything else would just keep working, right?

(And yes, I know USB network adapters aren't great, especially Realtek-based ones, but Proxmox is Debian, so I'm guessing/hoping the drivers are stable enough?)

Thanks!
 
Hi,
Regarding the USB network adapter: from what I've heard, they can be a bit flaky, which may be less a driver issue than a hardware one. If you have space for a PCIe card, that would certainly be better.

If you don't plan on using this extra interface for VMs, you can just add it to your /etc/network/interfaces:

Code:
...
# example interface name and addresses; replace with your own
auto ens21
iface ens21 inet static
    address 192.168.18.21/24
    # note: only one default gateway can be active at a time;
    # drop this line if vmbr0 already defines one
    gateway 192.168.18.1
...
You need to replace it with the right interface name, of course. You would have to configure the cluster to use this as its main interface for cluster traffic.
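For example, when you eventually create the cluster, you could point corosync's first link at that address (assuming PVE 6 or newer; the cluster name and addresses below are placeholders):

Code:
# on the first node, using the dedicated management/cluster IP
pvecm create mycluster --link0 192.168.18.21

# on a node joining later, giving its own address on that network
pvecm add 192.168.18.21 --link0 192.168.18.22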

You can also just replace the bridge-ports ens18 with another interface. But I'm not sure why you would want to do it that way.
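If you did want to go that route, the edit really is just the bridge-ports line; everything else on vmbr0 (address, gateway, and so on) stays the same. As a sketch, with made-up names and addresses:

Code:
auto vmbr0
iface vmbr0 inet static
    address 192.168.18.20/24
    gateway 192.168.18.1
    # was: bridge-ports ens18
    bridge-ports enx001122334455
    bridge-stp off
    bridge-fd 0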
 
Thanks!

I'm using a mini PC, so I don't have any PCIe slots available.* So, I've got:
  1. 2x2.5GbE (Intel) LAN ports on the motherboard; and
  2. USB connectivity sufficient to support a 2.5GbE adapter.
I wanted to bond the 2x 2.5GbE ports together and have the VMs use them, to give the VMs two separate 2.5GbE pipes to work with, using rock-solid Intel NICs. Particularly as I want to run a couple of Windows/Linux VMs with remote desktops, for both office work and 3D gaming, I want 5 Gbps of total bandwidth available.

At some point, I'm going to start to work on segmenting my network, and I wanted to have the option of creating a separate management network. To that end, I wanted to go ahead and try to set it up as a management/cluster control port now, even though I don't need it yet (no network segmentation, and I haven't set up the second PVE node yet), so I could test it a bit and make sure it's going to be stable (come up on boot, actually work, etc.).

*As I'm getting better with Proxmox and understanding what I want out of a virtualization server, I'm already finding using a mini PC constraining, but in terms of power usage and actual space in my office, it's still the best option for now. "I don't have enough PCIe slots, again," is the ongoing story of my home server experience. :P
 
I'm sure for home usage the USB Ethernet adapter will be good enough for now :). I'm a bit damaged by enterprise support ;).
 
The USB NIC will work as explained. Give it a try; if it's flaky, you can later just use a separate management VLAN over the bonded 2.5G NICs, and that shouldn't even be noticeable to any other traffic over that bond.
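Untested sketch of what that could look like with a VLAN-aware bridge over the bond (VLAN 10 and the addresses are made up; adjust to your network):

Code:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# management IP tagged on VLAN 10 over the same bond
auto vmbr0.10
iface vmbr0.10 inet static
    address 192.168.10.21/24
    gateway 192.168.10.1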
 
Thanks! PVE named the USB adapter "enx-a-bunch-of-numbers-c1," and it's still in the list even when the adapter is disconnected. I'm hoping that it's named after its MAC address and won't change, no matter which USB port I plug it into. Is that accurate, or is it going to change names based on port?

Initially, I was planning to access VM disk images via iSCSI over the bonded 2x 2.5Gbps connection.
If I create a separate management VLAN and use tagging to run it all over the two LACP-bonded ports, am I going to have to worry about over-saturating the connection and seeing performance loss on the VMs?

My initial motivation for adding the USB adapter was to have a third, separate 2.5Gbps connection I could dedicate solely to the management interface, so I could give all the bond's bandwidth to iSCSI VM discs.

Before I go further down the rabbit hole of actually setting up the VLAN tagging, as I've never actually set up a VLAN in my life and I'm still going through the PVE-for-idiots tutorial series I found, is this a valid concern? How likely am I to saturate the LACP 2x2.5Gbps bond with management+VM storage access via iSCSI to my NAS?

(I'm going to run some VMs on local storage, because they're dependent on hardware passthrough specific to this node, but I'd prefer to have as many as possible run via shared storage for reliability/ease of backup. I trust my NAS much more than the SATA consumer SSDs in this node.)
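For context, the shared-storage piece I have in mind is just a standard iSCSI storage entry in /etc/pve/storage.cfg, something like this (the storage ID, portal, and target below are placeholders for my NAS):

Code:
iscsi: nas-iscsi
    portal 192.168.18.50
    target iqn.2022-09.lan.nas:vmstore
    content images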
 
I'm hoping that it's named after its MAC address and won't change, no matter which USB port I plug it into. Is that accurate, or is it going to change names based on port?
I don't know the answer to this one, as I don't use these. Do you plan on frequently unplugging and plugging back in this device? Hopefully someone else, or just trial and error, can answer this for you.

If VLANs are new territory for you and you don't have a VLAN-capable switch already, then forget this for now. Proxmox management network traffic should really be minimal, especially if this is not a cluster of Proxmox nodes using high availability. You can grow into the VLAN stuff later, if/when necessary.
 
I don't know the answer to this one, as I don't use these. Do you plan on frequently unplugging and plugging back in this device? Hopefully someone else, or just trial and error, can answer this for you.

I don't plan on unplugging it at all once it's set up unless I'm moving it; just curious how careful I'll have to be about remembering what's plugged in where. On a Raspberry Pi, USB ethernet adapters definitely get different names based on which port they're in. I'll have to experiment. I'll update later with results. ;)
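If the name does turn out to depend on the port, my fallback plan is to pin it by MAC with a systemd .link file (the MAC and name below are placeholders), then check with ip -br link after a reboot:

Code:
# /etc/systemd/network/10-usb-nic.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=usbnic0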

If VLANs are new territory for you and you don't have a VLAN-capable switch already, then forget this for now. Proxmox management network traffic should really be minimal, especially if this is not a cluster of Proxmox nodes using high availability. You can grow into the VLAN stuff later, if/when necessary.
That's good news. I'm just old enough to remember when 100-megabit LANs were insane luxuries for giant corporations; part of me will always be awed that I get to play with multigigabit equipment. I'm glad to know the 2x 2.5Gbps setup will work. ;)

I've got VLAN capable switches; I've just never used that functionality. My WiFi AP probably isn't VLAN capable, so I'll have to upgrade that at some point. This is definitely something I'm saving for later, when I have time to learn and properly segment my network. I think my first project with VLANs will be to build a dedicated 10Gbps storage VLAN for my TrueNAS install and the hosts that are actually fast enough to use that speed.

I do want to get the VMs onto their own VLAN eventually, if only so I can put them on their own subnet(s), since I'm running IPv4 and no IPv6 (my ISP's implementation of IPv6 is completely broken, and having it enabled even internally seems to give OPNsense fits).

I do intend to cluster at some point; I've got a 3U server I'm going to put PVE on that will hold, for instance, a ZFS striped mirror array for mass shared storage. That system doesn't have a GPU; the node I'm building out now does, and its hardware is simpler, too. It's a little mini PC, and it's been easier to learn Proxmox on more streamlined hardware.

My main goal for clustering was centralizing management of both nodes, and container/VM migration when needed. I can't imagine doing anything that needs high availability, especially since I'd have to set up a whole separate physical box just for the quorum PVE node (is that what it's called?). Nothing I plan to self-host would lead to disaster if it went down for a bit. I'm already pushing the limits of my available space/amps as it is.

Thanks again for all your help.
 
