multiple interfaces - how to configure?

chupacabra

New Member
Apr 9, 2023
Not certain how to ask this, but here it goes... I have a host that has:
  • a single ethernet port on the motherboard that I use for the management IP and gateway
  • a bonded two-port gigabit card
  • a 10GbE card
How do I tell which interface will be used for certain actions like backup, clone, moving disks, etc.?

Right now my configuration looks like this:
Code:
auto lo
iface lo inet loopback

iface enp0s31f6 inet manual
#NIC on motherboard

iface wlo1 inet manual

iface enp1s0 inet manual
        mtu 9000
#10GbE Card Port 0

iface enp1s0d1 inet manual
        mtu 9000
#10GbE Card Port 1

auto enp6s0f0
iface enp6s0f0 inet manual
#1GbE Card Port 0

auto enp6s0f1
iface enp6s0f1 inet manual
#1GbE Card Port 1

auto bond0
iface bond0 inet manual
        bond-slaves enp6s0f0 enp6s0f1
        bond-miimon 100
        bond-mode 802.3ad
#LACP of 1GbE Card Ports

auto vmbr0
iface vmbr0 inet static
        address 10.10.100.100/24
        gateway 10.10.100.1
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0
#Bridge on motherboard NIC

auto vmbr1
iface vmbr1 inet static
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#Bridge on bond0

auto vmbr2
iface vmbr2 inet static
        address 10.10.50.100/24
        bridge-ports enp1s0d1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-126
        mtu 9000
#Bridge for 10GbE Ports
 
Hi,
you can define a migration network to achieve separation of traffic; see the corresponding subsection in https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration
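A minimal sketch of what that could look like, assuming you want migration to run over the bonded gigabit links: vmbr1 currently carries no address, so you would give it one in a subnet reserved for migration (the 10.10.20.0/24 subnet and the host address below are placeholders, not taken from your config) and then point the migration option in /etc/pve/datacenter.cfg at that subnet.
Code:
# /etc/network/interfaces -- example address on the bond bridge (placeholder subnet)
auto vmbr1
iface vmbr1 inet static
        address 10.10.20.100/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# /etc/pve/datacenter.cfg -- send migration traffic over that subnet
migration: secure,network=10.10.20.0/24
With that in place, migration traffic between nodes uses the node addresses in that subnet instead of the management network.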

As for backups, this will depend on how your backup storage is connected to the PVE cluster and how the data is routed. What storage do you use for backups? Can you post the output of cat /etc/pve/storage.cfg?
 
Thanks! I just found what you were talking about regarding the dedicated migration network. I'll modify /etc/pve/datacenter.cfg to use the bonded gigabit network for dedicated migration.

As for storage, each node has a RAID10 ZFS pool for local storage, and there are two NAS devices (NAS01 and NAS02). NAS01 is all SSD; NAS02 has a mix of SSDs and HDDs. Here is the storage config:
Code:
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

nfs: nas02-backup
        export /volume1/PVE-Backup
        path /mnt/pve/nas02-backup
        server 10.10.10.52
        content iso,vztmpl,backup,rootdir,images,snippets
        prune-backups keep-all=1

nfs: nas02-HDD
        export /volume1/PVE-Store-HDD
        path /mnt/pve/nas02-HDD
        server 10.10.10.52
        content snippets,images,rootdir,backup,vztmpl,iso
        prune-backups keep-all=1

nfs: nas01-SSD
        export /volume1/PVE-Store-SSD
        path /mnt/pve/nas01-SSD
        server 10.10.10.50
        content rootdir,images,snippets,iso,vztmpl,backup
        prune-backups keep-all=1

zfspool: local-SSD-R10
        pool local-SSD-R10
        content rootdir,images
        mountpoint /local-SSD-R10
        nodes pve03,pve02,pve01

nfs: nas01-fast
        export /volume1/SILO-Proxmox
        path /mnt/pve/nas01-fast
        server 10.10.50.50
        content snippets,images,backup,vztmpl,iso,rootdir
        prune-backups keep-all=1
 
Okay,
from the outputs you posted it is not clear which interface the backup traffic will use, since I see no config for the 10.10.10.0/24 subnet. I assume that nas02-backup is the storage backend used to store your backups?

So as stated, the backup traffic will be routed according to your network configuration. Make sure that the respective interface is configured for the subnet where your storage server resides.
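For example, assuming nas02-backup at 10.10.10.52 is indeed the backup target, you can ask the kernel which interface it would use to reach that address:
Code:
# which route (and therefore which interface) is used for the backup server?
ip route get 10.10.10.52

# with no local address in 10.10.10.0/24 this will show something like
#   10.10.10.52 via 10.10.100.1 dev vmbr0 src 10.10.100.100 ...
# i.e. the traffic leaves over the management bridge and your default gateway.
By contrast, nas01-fast at 10.10.50.50 sits in the 10.10.50.0/24 subnet that vmbr2 already has an address in, so traffic to that storage should already use the 10GbE card.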
 
Ok, I think I get it now. Thank you for the explanation and help. One last question: is it better to define a VLAN on the interface or on the bridge? Also, if I define a VLAN, say VLAN 50, on multiple bridges, is there a way to set priority, or is it better to keep VLANs segregated to interfaces?
 
This depends on your use case, but in general it is recommended to tag the interface/bond rather than the bridge. Please let me refer you to section 3.3.8 in the docs for some examples [0].

[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration
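For illustration, a sketch of the "VLAN on the bond" approach from those examples, mapped onto your interface names (VLAN 50 is just the tag from your question; the bridge name and the separate-bridge-per-VLAN layout are illustrative):
Code:
# tag VLAN 50 on the bond itself ...
auto bond0.50
iface bond0.50 inet manual

# ... and attach a plain bridge to the tagged interface for guests in that VLAN
auto vmbr50
iface vmbr50 inet manual
        bridge-ports bond0.50
        bridge-stp off
        bridge-fd 0
If you keep the VLAN-aware vmbr1 on the same bond, you would normally take 50 out of its bridge-vids range so that the same tag is not handled in two places.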
 
